Friday, July 23, 2021

Ensemble Programming as Idea Integration

This week at the Agile 2021 conference, I took in ideas that I am still processing. Perhaps the most pressing of those came from the post-talk question-and-answer session with Chris Lucian, where I picked up the following details about ensemble (mob) programming at Hunter:

  • Ideas of what to implement generally come to their team in a more ready form than would be my happy place - I like to do a mix of discovery and delivery work, and would find myself unhappy if discovery were someone else's work. 
  • Optimizing for flow through a repeatable pattern is a focus: from scenario example through TDD all the way to done. The talk focused on habits, since skill is both built up through habit and overemphasized in the industry.
  • A working day for a full-time ensemble (mob) has one hour of learning in groups and seven hours of other work, split between working in rotations, pausing to retrospect, and taking breaks. Friday is a special working day with two hours of learning in groups. 
The learning time puzzled me in particular - it is spent pulling in knowledge others have, improving efficiency, and looking at new tech. 
A question people seem to ask a lot about Ensemble Programming (and Testing) is whether it is something done full-time, and by the accounts from Hunter, where the practice originated, that is exactly what it is. Then again, with all the breaks they take, the learning time, and the continuous stops for retrospective conversations, is it really full-time? Well, it definitely fills the day and sets the agenda for people, together. 

This led me to think about individual contributions and ensembling. I do not come up with my best ideas while in the group. I come up with them when I sit and stare at a wall. Or take a shower. Or talk with my people (often other than the colleagues I work with), explaining my experience while trying to catch a thought. The best work-related ideas I have are individual reflections that, when I feel welcome, I share with the group. They are born in isolation, fueled by time together with people, and implemented, improved, and integrated in collaboration. 

Full eight-hour working days with a preset agenda would push thinking time into my free time - or require a change in how the time is allocated so that it fits. With so many retrospectives and a focus on kindness, consideration, and respect, things do sound negotiable, as long as one does not fold under the group's differing opinions. 

I took a moment to rewatch Susan Cain's amazing talk on introverts. She reminds us: "Being the best talker and having the best ideas has zero correlation." However, being the worst talker and having the best ideas also has zero correlation. If you can't communicate your ideas and get others to accept and even amplify them, your ideas may never see the light of day. This was a particularly important lesson for me on Ensemble Programming. I had great ideas as a tester who did not program, but many - most - of my ideas did not see the light of day.

Here's the thing. In most software development efforts, we are not looking for the absolute best ideas. But it would be great not to miss out on the great ideas of the people we hired just because we don't know how to work together and hear each other out.

And trust me - everyone has ideas worth listening to. Ideas worth evolving. Ideas that deserve to be heard. People matter and are valuable, and I'd love to see collaboration as value over competitiveness.

The best ideas are not created in ensembles; they are implemented and integrated in ensembles. If you can't effectively bind together the ideas of multiple people, you won't get big things done. Collaboration is aligning our individual contributions while optimizing learning, so that the individuals in the group can contribute their best. 








Tuesday, July 20, 2021

Mapping the Future in Ensemble Testing

Six years ago when I started experimenting with ensemble testing, one of the key dynamics I set a lot of experiments around was *taking notes* and through that, *modeling the system*. 

At first, I made notetaking/modeling a role in the ensemble, rotating in the same cycle as the other roles. It was a role in which people got lost. Taking over a document you had not seen and trying to continue from it was harder than the other activities, and I quickly concluded that for an ensemble to stay on a common problem, the notes/model needed to be shared. 

I also tried a volunteer notetaker who would continuously describe what the ensemble was learning. I noticed a good notetaker became the facilitator, and generally ended up hijacking control from the rest of the group by pointing out, in a nice and unassuming way, the level of conversation we were having. 

So I arrived at the dynamic I start with now in all ensemble testing sessions. Notetaking/modeling is part of testing, and Hands (driver) will be executing notetaking from the request of the Brains (designated navigator) or Voices (other navigators). Other navigators can also keep their own notes of information to feed in later, but I have come to understand that in a new ensemble, they almost never will, and it works well for me as a facilitator to occasionally make space for people to offload the threads they model inside their heads into the shared visible notes/model. 

Recently I have been experimenting with yet another variation of the dynamic. Instead of notes/model that we share as a group and make visible through the Hands, I've allowed an ensemble to use Mural (a post-it wall) in the background to offload their threads, with a focus on mapping the future they are not allowed to work on right now because of the ongoing focus. It shows early promise of giving the Voices something extra to do - a way of using their voice that isn't shouting their ideas over what is ongoing, but improving something shared.

Early observations say that some people like this, but it skews the idea of us all being on this task together, and can cause people to find themselves unavailable for the work we are doing now, dwelling in a possible future. 

I could use a control group that has ensembled together longer; my groups tend to be formed for a teaching purpose, and the dynamics of an established ensemble are very different from those of a first-time ensemble. 

Experimenting continues. 



Wednesday, July 14, 2021

Ensemble Exploratory Testing and Unshared Model of Test Threads

When I first started working on specific adaptations of Ensemble Programming to become Ensemble Testing, I learned that it felt a lot harder to get a good experience on exploratory testing activity than on a programming activity, like test automation creation. When the world of options is completely open, and every single person in the ensemble has their own model of how they test, people need constraints that align them. 

An experienced exploratory tester creates constraints - and some even explain their constraints - in the moment to guide the learning that happens. But what about when our testers are not experienced exploratory testers, nor experienced in explaining their thinking? 

When we explore alone, we start somewhere, and we call that the start of a thread. Every test where we learn creates new options and opportunities, and sometimes we *name a new thread* yet continue on what we were doing, and sometimes we *start a new thread*. We build a tree of these threads, choosing which one is active, and manage the connections that soon start to resemble a network rather than a tree. This is a model that guides our decisions on what we do next, and on when we will say we are done. 
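As a minimal sketch, the thread model described above could be written down as a small tree with cross-links; every class and thread name here is my own illustration, not an established notation or tool:

```python
# A sketch of the exploratory-testing thread model: a tree of threads
# with cross-links, and one thread active at a time. All names are
# illustrative assumptions, not part of any real methodology's vocabulary.
class Thread:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent      # the thread this one branched from
        self.children = []        # threads discovered while on this one
        self.links = []           # cross-links that turn the tree into a network
        if parent:
            parent.children.append(self)

start = Thread("login with defaults")
error_texts = Thread("error message wording", parent=start)  # named, not yet taken
timeouts = Thread("session timeout", parent=start)           # started as a new thread
timeouts.links.append(error_texts)  # timeouts also surface error messages

active = timeouts  # the one thread we are exploring right now
```

The interesting part is not the data structure but that each of us holds one of these in our head, and choosing `active` is the decision point the rest of the post is about.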

The model of threads is a personal thing we hold in our heads. And when we explore together in ensemble testing, we have two options:

  1. We accept that we have personal models that aren't shared, that could cause friction (hijacking control) 
  2. We create a visual model of our threads
The more we document - and modeling together is documenting - the slower we go. 

I reviewed various ensemble testing sessions I have been facilitating, and noticed an interesting pattern. The ensemble was more comfortable and at ease with their exploratory testing if I first gave them the constraint of producing visible test ideas before exploring. At the same time, they generally found fewer issues to start conversations on, and held more strongly to the premature assumptions they had made about the application under test. 

Over time, it would be good for a group to create constraints that allow for different people to show their natural styles of exploratory testing, to create a style the group shares. 

Wednesday, July 7, 2021

Working with Requirements

A long, long time ago I wrote down a quote that I never manage to remember when I want to write it down. Given it is in my top 10 of things I go back to, I should remember it by now. Alas, no. 

"If it's your decision to make, it's design. If it's not, it's a requirement." - Alistair Cockburn

Instead, I have no difficulty recalling the numerous times someone - usually one of the developers - has said that something *was not a requirement*; the frequency is overwhelming. After all these years working to deliver software, I think we hide behind requirements a lot. And I feel we need to reconsider what a requirement really is. 

When our customers ask us for things they want in order to buy our solution, there's a lot of interpretation around their requirements. I have found we do best with that interpretation when we get to the *why* behind the *what* they are asking for, and even then, things are negotiable much more often than not. 

In the last year, requirements have been my source of discontent, and concern. In the first project we delivered together, we had four requirements and one test case. And a thousand conversations. It was brilliant, and the conversations still pay back today. 

In the second project we delivered together, we had more carefully isolated requirements for the various subsystems, but the conversation was significantly more cumbersome. I call it a success that 90% of the requirements vanished a week before delivery, while the scope of the delivery was better than those requirements had led us to believe. 

In another effort in the last year, I have been going through meticulously written-down requirements, and finding that the requirements make it harder, not easier, to understand what we are building and why.

Requirements, for the most part, should truly be about the decisions we can't make. Otherwise, let's focus on designing for value. 


Thursday, July 1, 2021

Learning while Testing

You know how I keep emphasizing that exploratory testing is about learning? Not all testing is, but to really do a good job of exploratory testing, I would expect learning to be centered - learning to optimize the value of our testing. But what does that mean in practice? Jenna gives us a chance to have a conversation on that with her thought experiment about a daily withdrawal limit of $300 at an ATM: 

When I first came across Jenna's thought experiment, I was going to pass on it. I hate being put on the spot with exercises where the exercise designer holds the secret of what you will trip on. But then someone I admire dared to take the challenge on in a way that did not optimize for *speed of learning*, and that made me wonder what I would really respond.

It Starts with a Model

Reading through the statement, a model starts to form in my head. I have a balance of some sort that limits my ability to withdraw money, a withdrawal of some sort that describes the action I'm about to perform, ATM functionality of some sort, and a day that frames my limitation in time. 
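A minimal sketch of that forming model, in code. The $300 limit comes from the thought experiment; everything else - the names, the $500 balance, the rule that both constraints apply - is my own assumption layered on top:

```python
from dataclasses import dataclass

# A sketch of the model forming in my head: a balance, a daily limit,
# and a running total for the day. The names and the interplay of the
# two constraints are my assumptions, not the thought experiment's text.
@dataclass
class WithdrawalModel:
    balance: float            # limits my ability to withdraw money
    daily_limit: float        # the $300-per-day constraint under test
    withdrawn_today: float = 0.0

    def can_withdraw(self, amount: float) -> bool:
        within_balance = amount <= self.balance
        within_limit = self.withdrawn_today + amount <= self.daily_limit
        return within_balance and within_limit

model = WithdrawalModel(balance=500.0, daily_limit=300.0)
```

Writing even this much down exposes the questions I list below: I had to guess whose balance, whose day, and whose limit.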

I have no knowledge on what works and what does not, and I don't know *why* it would matter that there is a limit in the first place. 

The First Test

If my first test is a positive case - having more than $300 on my balance, withdrawing $300, expecting to get the cash in hand, and then trying the smallest sum on top of that that the ATM's functionality allows - I would at least know the limit can work. But that significantly limits anything else I can learn on that first day. 

I would not have learned anything about the requirement, though. Or about risks, the other side of the requirement. 

But I could have learned that even the most basic case does not work. That isn't a lot of learning though. 

Seeing Options

To know if I am learning as much as I could, it helps if I see my options. So I draw a mindmap. 

Marking the choices my first test would make makes it clear how I limit my learning about the requirements. Every single branch is in itself a question of whether that type of thing exists within the requirements, and I would know little beyond what was easily made available. 

I could easily adjust my first test by at least giving myself a tour of the ATM's functionality before completing my transaction. And covering all the ways I can imagine of going over the limit after that first transaction would, over time, lift some of the constraints I first see limiting my learning. 

Making Choices on Learning

To learn about the requirement, I would like to test things that teach me about the scope the limit pertains to (one user / one account / one account type / one ATM) and about what assumptions are built into the idea of having a daily limit alongside a secondary limit through balance. 

For all we know, the way the requirement is written, it could be an ATM-specific withdrawal limit! 
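The ambiguity can be made concrete with a sketch: the same two withdrawals, totaled under different readings of the limit's scope. All names and figures below are my own illustration, not anything the requirement states:

```python
# The same withdrawal history, summed under different scope readings of
# "daily limit". User "anna", her accounts and the ATM ids are made up.
withdrawals = [
    {"user": "anna", "account": "a1", "atm": "atm-1", "amount": 200},
    {"user": "anna", "account": "a2", "atm": "atm-2", "amount": 200},
]

def total_for(scope, value):
    """Total withdrawn today under one reading of the limit's scope."""
    return sum(w["amount"] for w in withdrawals if w[scope] == value)

# Under a per-user limit, Anna is over $300; under a per-account or
# per-ATM reading of the very same requirement, she is still within it.
assert total_for("user", "anna") == 400     # over a $300 per-user limit
assert total_for("account", "a1") == 200    # within a per-account limit
assert total_for("atm", "atm-2") == 200     # within a per-ATM limit
```

The point of the sketch is that every scope reading produces a different pass/fail verdict for the same behavior, which is exactly what testing hands-on would surface.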

Hands-on with the application, learning, would show us what frame its behavior fits in; without time to first test things out, I would just want to walk away at this point.