Sunday, August 30, 2020

Pondering on Requirements

I remember the day in my career when I understood that Requirements were something special. 

I had spent years in software testing where feedback was welcome even if the resulting changes got prioritized to wait, but this project was different.

I could have seen it coming from the advice people were giving on exact requirements traceability, like we were preparing for a war to defend what was rightfully ours. 

I had tested a system and found out it had been built on a version of an open source component that was end of life, with problems that would, as the project progressed, lead us to a dead end in our ability to react. It had been built in a way where changing the component was not straightforward, quite the opposite. I reported this, getting the attention of my own organization's entire management team. We scheduled a meeting with the subcontractor's architect, and he played the Requirements card. We had never specifically said that a brand new, in-progress, multi-million system putting us into this position should rely on something different. 

My years with this company were filled with experiences like this. Continuous fights over contractual clauses. Meetings where we would discuss moving money for yet another newly discovered Requirement, like one saying that a machine intended to calculate monetary benefits should calculate at least roughly correctly. No, all of those were our mistakes in the Requirements. 

Years passed, and I learned to choose my work so that instead of focusing on Requirements, we focused on value and features and progress. Requirements stayed in the role they should be in: points of communication, aiming for mutual understanding and benefit to our customers. Useful for testing to know as a rough idea of what we were building, but not the focus or the limit of what there is to that system. 

With agile, we learned that epics and stories were not requirements, they were a new kind of intermix of a mini project plan and something requirements-like. With continuous delivery, we could do small slices at a time, and running tested features supported by test automation and a caring team were our new normal. 

When the Requirements card gets played now, it is played to avoid responsibility on one side of the mutual relationship of building something good. It's played to say there needs to be a list and proof of covering all of it, because someone expects something they are sure you cannot do without that. The cost of the proof - not just the direct work but the impact on being able to see things and stay motivated - is treated as irrelevant. 



Friday, August 21, 2020

A Tester Hiring Experiment - Test with Them

For two summers in a row, for two different companies, I have been in the lovely position of being able to offer a temporary trainee position for the summer. As one can imagine, when there is a true beginner position available, there are a lot of applicants. 

Unfortunately, it is really hard to tell one applicant from another when what you look for is potential. Our general approximations of potential are off, and the biases we have will impact our choices. 

The first trainee that we selected for last summer came through the HR pipeline. The lucky 5 selected out of thousands to be viewed at the final stages were fascinating people to talk with. 

First of all, they all had programming experience. I particularly remember a woman with 2 years in a position in another company, whom my co-interviewer rejected based on "not knowing this piece of trivia means she does not know anything", and a 15-year-old boy with 6 years of programming experience. While I'm delighted the promising young man got a chance, that recruitment filled my heart with hopelessness for anyone who starts later in life.  

Again, we are recruiting on potential. Obviously we want the person to contribute to the work. But we also want the person to learn, to grow, and to become someone they are not yet when they start their work with us. And you can't see that potential from past achievements when we are talking about an entry-level position. 

Entry level positions are like placing bets. The chances are we will never know we missed an awesome candidate. Or, chances are, we will know later in life when that person we rejected on trivia shows up as our boss. 

With my hopeless heart, I needed an experiment to bring back hope. To balance the last line of trainees all being men, I facilitated the creation of another position, open to everyone but primarily marketed in women's spaces. I did so well with targeting my marketing that men did not apply. Where you post matters for who you get. 

Also, I wanted to try a different criterion. Instead of selecting them based on how they write and talk about their aspirations and experiences, I refused to read their CVs to prioritize who I would talk to. I talked to every single one of them, for 15 minutes. Or rather, as I set expectations already in the invite to apply, we wouldn't talk about them. We would pair test an application, because the main skill I would look for in a candidate under my supervision is learning under my supervision.

I saw the people on the video calls as we worked together on the same problem, over and over again with the candidate changing. I would write notes on how they approached the problem and how they incorporated my guidance as we were strong-style pairing on test code. And after a lot of calls, I had a small group of people with a tester kind of thinking and learning pattern to select from. The final choice was based on luck: I put the three names in a hat and pulled one out. 

Turns out she was a 47-year-old career changer. The moment where I felt like I should have given this rare opportunity to a younger woman was a great revelation of my personal built-in ageism. Having acknowledged my bias, I set out to help her succeed. 

During the summer, she learned to write test code in Python and include it in a continuous integration system. She explored and analyzed a feature, and got multiple things in it corrected. The tests she contributed were our choice, and in hindsight, our choice sucked. She was part of starting a larger discussion on what types of tests are worth it, and which are of little value. Her coded tests didn't fail because she couldn't code or analyze a feature, but because we pointed her at a feature that we really should have thought twice about. 

The lessons I drew of this are invaluable to me:

  • Choosing a tester by testing with them is a better foundation
  • Choosing a tester by testing can happen in short sessions and overall time is better used in this activity over deciphering a CV
  • The work we allocate to someone starting as new does matter, and their success is founded on our choices
  • Diversity of our work force will never change if we expect our 15-year-old summer trainees to come with 6 years of programming experience. The field evidence shows that a late start does not hinder later usefulness. 
So this summer, my experiment has been around how I teach at work. I throw new people at the versatility of a real project and protect their corner less. I work to make myself somewhat available for moving them forward. And after two months of watching, I am delighted with how well they do the basic tester job: finding information, driving fixes and doing some themselves, and automating with a selection of programming languages. Obviously they have more work to do on learning, but so do I, and I have been at this for 25 years.  




Saturday, August 8, 2020

Recall Heuristics for Test Design

Good exploratory testing balances our choices of what to do now so that whenever we are out of time, we've done the best job testing we could in the time we were given, and are capable of having a conversation about our ideas of the risks we have not assessed. To balance choices, we need to know there are choices, and recently I have observed that the range of choices some testers make is limited. A lot of what we call test design nowadays is recalling information to make informed selections. Just like they say: 

     If the only tool you know is a hammer, everything starts to look like a nail. 

We could add an exploratory testing disillusionment corollary: 

    It's not just that everything starts to look like a nail, we are only capable of noticing nails. 

The most common nail I see testers notice is the error handling cases of any functionality. This balances the most common nail programmers see, the sunny day scenario of any functionality, and with the two roles working together, we already have a little better coverage of functionality in general.

To avoid the one-ingredient recipe, we need awareness of all kinds of ingredients. We need to know a wide selection of options for how to document our testing: writing instructional test cases, making freeform notes, making structured notes on an individual level, making structured notes on a group level, or documenting tests as automation as we are doing it. We need to know a selection of coverage perspectives. We need to know that while we are creating programs in code, they are made for people, and a wide variety of people and societal disciplines apply, from social sciences to economics to legal. We need to know the relevant ways things have failed before, being well versed in both generally available bug folklore and local bug folklore, and to consider not only not failing the same way again, but also not allowing our past failures to limit our future potential - driving testing by risk, not fear. 

This all comes down to the moment you sit in a team meeting, and you do backlog refinement over the new functionality your team is about to work on. What are the tasks you ensure the list includes so that testing gets done? 

In that moment, what I find useful when put on the spot is recall heuristics: something that helps me remember and explain my thoughts in a team setting. We can't make a decision in the moment without knowing our options.

I find I use three different levels of recall heuristics to recall my options in the moment. Each level explores at a different level of abstraction:

  • change: starting from a baseline where the code worked, a lot of the time what I get to test is at the level of a code commit to trunk (or about to head to trunk). 
  • story: starting from a supposedly vertical slice of a feature, a user story. In my experience, though, teams are really bad at story-based development, and this abstraction is rarely available even if it is often presented as the go-to level for agile teams. 
  • feature: starting from a collection of value in the hands of customers, where we all can buy into the idea of enabling new functionality. 

For a story-level recall heuristic, I really like what Anne-Marie Charrett has offered in her post here. At the same time, I am in a position of not seeing much story-based development; the backlogs around me tend to be built on value items (features and capabilities), with the story format not considered essential. 

Recall on level of change

The trigger for this level of recall is a change in code. Not a Jira ticket, but seeing lines of code change with a comment that describes the programmer's intent for the change. 

Sometimes this happens in a situation of pairing, on the programmer's computer, the two of you working together on a change. 

Sometimes this happens on a pull request, someone having made a change and asking for approval to merge it to trunk. 

Sometimes this happens on seeing a pull request merged and thus available in the test environment. 

This moment of recall happens many times a day, and thinking quickly on your feet under unknown change is the difference between fast feedback and delayed feedback.

How I recall here:

  • (I) intent: What is supposed to be different? 
  • (S) scope: How much code changed? Focused or dispersed? 
  • (F) fingerprint: Whose change, what track record?   
  • (O) on it: How do I see it work?
  • (A) around it: How do I see other potentially connected things still work?
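
When I want the heuristic at hand rather than only in my head, a simple note template works. A minimal sketch in Python - the field names mirror the letters above, and the example values are made up, not from any real change:

    from dataclasses import dataclass, field

    @dataclass
    class ChangeReviewNotes:
        intent: str                                     # (I) what is supposed to be different?
        scope: str                                      # (S) how much code changed - focused or dispersed?
        fingerprint: str                                # (F) whose change, what track record?
        on_it: list = field(default_factory=list)       # (O) how did I see it work?
        around_it: list = field(default_factory=list)   # (A) what connected things did I still see work?

    # illustrative values only
    notes = ChangeReviewNotes(
        intent="fix rounding in monthly totals",
        scope="one function in the billing module",
        fingerprint="new contributor, first change in this area",
    )
    notes.on_it.append("recalculated a sample invoice, total now rounds as intended")
    notes.around_it.append("yearly totals and the CSV export look unchanged")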

Recall on level of feature

The trigger for this level of recall is the need for test planning on the scale of a feature, to facilitate programmers carrying their share of testing but also to make space for testing. 

Sometimes this happens in a backlog refinement meeting, the whole team brainstorming how we would test a feature.

Sometimes this happens in a pair, coming up with ideas of what we'd want to see tested. 

Sometimes this happens alone, thinking through the work that needs doing for a new feature when the work list is formed by a process implying "testing" happens on every story ticket and epic ticket without agreeing what it specifically would mean. 

  • (L) Learning: Where can we get more information about this: documents, domain understanding, customer contacts. 
  • (A) Architecture: What building it means for us, what changes and what new comes in, what stays. 
  • (F) Functionality: What does it do and where's the value? How do we see value in monitoring?
  • (P) Parafunctional: Not just that it works, but how: usability, accessibility, security, reliability, performance...
  • (D) Data: What information gets saved temporarily, retained, and where. How do we create what we need in terms of data?
  • (E) Environment: What does it rely on? How we get to see it in growing pieces, and where?
  • (S) Stakeholders: People we hold space for. Not just users/customers but also our support, our documentation, our business management. 
  • (L) Lifecycle: Features connect to processes, in time. Not just once but many times. 
  • (I) Integrations: Other folks' things we rely on.  

Recalling helps us make choices because we become aware of our choices. It also helps us call in help for making those choices. 

Monday, August 3, 2020

An Analysis of Exploratory Testing

Open space conferences like Socrates UK Digital Summer provide a great platform for making a little progress on finding ways to teach about exploratory testing in writing. For the purposes of writing, I ran an ensemble testing session to compare notes on what I did alone in preparation vs. where the group ended up. Putting the two together could provide useful lessons for those who did not get to join.

For these sessions, I picked up a new test target. Eviltester posted some of his testing apps and games a while back, and EPrimer ended up as my choice as it promised 
  1. Not heavy on bugs - could actually focus on testing instead of bug reporting
  2. Completely unknown domain: a proper English writing style called "eprime" that I had never heard of. 
  3. WebUI with beautiful IDs
At this point, I encourage you to follow the link to the app and stop reading what I say until you have tried it out yourself. If you did not follow my encouragement, I suggest that after reading this, you pick up another of the eviltester test targets and apply what you learned here. 
Session Charter: Explore EPrimer focusing on two kinds of documentation as output: test automation you can run (using e.g. Robot Framework) and a mindmap. Time: 1 hour plus learning time for the test automation tool if you have no experience and no expert available to answer your questions in the moment. 
Two sessions, two results
 
As expected, the two sessions provided very different results that complement one another.

Session one produced ~30 tests one can run again, spread over 7 test suites, each named after the type of data collection it was testing, and a mindmap built on the realization that all tests were on a single function while there were multiple. It covered the domain description as a specification well, identifying multiple problems against the specification. 

Session two produced 5 tests one can run again, all in 1 test suite where a bit of commenting out is necessary to get the tests to run later. The coverage of functions was significantly better and the session identified 2 bugs. No mindmap was created, and the better function coverage came from choosing to understand everything a little rather than diving systematically into the specification. A single created test covered more ground. 

Breakdown of activities

Whenever we are doing exploratory testing, we get to make choices of where we use our limited time based on the best information available at the time of testing. We are expected to intertwine various activities, and when learning, it may be easier to learn one activity at a time before intertwining them.

If you think back to learning to drive (while a stick shift was still a thing), you have probably ended up in an intersection, about to move forward, with your car stalling because intertwining your actions on the gears and pedals was not quite what it should be. You slowed down, made space for each activity and got the car moving again. Exploratory testing is like that: you control the pace, and those who have practiced long will intertwine activities in a way that appears magical. 

For this testing target, we had multiple activities we needed to intertwine (learn / design / execute):
  • Quickly acquiring domain knowledge: no one knew what eprime is, and we had our choice of reading about it.
  • Acquiring functional knowledge: using the application and figuring out what it does.
  • Creating simple scripts with multiple inputs and outputs: using the same test as a template for data-driven testing helps repeat similar cases in groups. 
  • Identifying css selectors: if you wanted test automation scripts, you needed to figure out what to click and verify and how to refer to those from the scripts. 
  • Controlling scope of tests: see it yourself, see it blink with automation, repeat all, repeat only the latest. 
  • Creating an invisible or visible model: Seeing SFDPOT (Structure, Function, Data, Platform, Operations, Time) to understand coverage in selected or multiple dimensions
  • Cleaning up test automation: Improving naming and structuring to make more sense than what was created in the moment.
  • Using the application: Making space for chances to see problems beyond the immediate test
  • Identifying problems: Recognizing problems with the application. 
  • Documenting problems: Writing down problems in either test automation or otherwise.
  • Working to systematic coverage: Pick a dimension, and systematically cover it learning more on it. 
  • Reading the code: We had the code and we could read it. That could add to our understanding. 
I took another two hours, on top of the two hours of sessions, cleaning up the results. I summarized the final results like this:
The app isn't completely tested, as the exercise setting biased us towards documentation. 

Examples of activities

Quickly acquiring domain knowledge. Reading the specification of eprime. Focusing on examples of eprime. Refreshing knowledge of English grammar around the verb "to be", e.g. 5 basic forms of verbs and 6 different types of verbs, or 6 categories of verbs - all things I googled for as I was writing this. While the specification tells what knowledge was probably used to create the application, there is domain knowledge beyond what people choose to write down in a specification. 

Acquiring functional knowledge. Using the application. Asking questions about what is visible, particularly the concepts that are not obvious: "What is Possible Violations?". Seeking data demonstrating it could work. Seeking large data to demonstrate functions through serendipity. 
 
Creating simple scripts with multiple inputs and outputs. Writing test automation that allows for giving multiple input and output values as parameters. Getting into the tool and into using the tool. 

Identifying css selectors. Getting to various values with code, understanding what is there in different functions to click and check. Feeling joy over systematic ID use making the work easier. Recognizing conflicts between the UI language and the selector language. 
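
To make the previous two activities concrete, here is a minimal sketch of the data-driven pattern in pytest with Selenium (the sessions themselves used Robot Framework). The URL, the element ids and the expected counts are hypothetical placeholders, not EPrimer's real selectors or behaviour:

    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    EPRIMER_URL = "http://localhost:8000/eprimer.html"  # placeholder: point at wherever the app is served

    # one test as a template, repeated over a group of similar cases
    EXAMPLES = [
        ("The cat is on the mat.", "1"),   # illustrative expected count of discouraged words
        ("A cat sits on a mat.", "0"),
    ]

    @pytest.fixture
    def page():
        driver = webdriver.Chrome()
        driver.get(EPRIMER_URL)
        yield driver
        driver.quit()

    @pytest.mark.parametrize("text,expected", EXAMPLES)
    def test_discouraged_word_count(page, text, expected):
        page.find_element(By.ID, "inputtext").send_keys(text)                  # hypothetical id
        page.find_element(By.ID, "checkbutton").click()                        # hypothetical id
        assert page.find_element(By.ID, "discouragedcount").text == expected   # hypothetical id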

Controlling scope of tests. Moving tests to separate files. Commenting out tests that already work. Running tests one by one. 

Creating an invisible or visible model. Ensuring we see all things work once before we dig deeper in any individually. Creating a map of learning in the last minutes of the session. Writing down notes of what we are seeing as either test automation or other types of documents. 

Cleaning up test automation. Renaming everything named foo at a time when we knew the least. Commenting out things to focus on getting the next thing done efficiently. Using domain concepts as names of collections. 

Using the application. Spending time using the application to allow for serendipity. Observing look and feel. Observing selected terminology. 

Identifying problems. Seeing things that don't work. Like the visuals being very bare-bones. Or line breaks turning valid positives into false negatives. 

Documenting problems. Writing these down in test automation. Figuring out if we want to leave a test passing (documenting production behavior) or failing (documenting bugs). Remembering issues to mention. Writing them down as notes. Writing a proper bug report. 

Working to systematic coverage. Stopping to compare models to how well those are covered. Creating a visual model. Covering everything in the specification. Covering all visible functionality. 

Reading the code. Closing the window that has the code as it gets in the way of moving between windows. Reading the code to see how concepts were implemented.  

Some Reflections

Every time I teach exploratory testing, I feel I should find ways of teaching each activity separately. There is a lot going on at the same time, and part of its effectiveness is exactly that. 

In the group, someone suggested we could split the activity so that we first only document as test automation, not caring about any of the information other than what is true right now in the application. Then we could later review it against specifications and domain knowledge. That could work. It would definitely work as one of the many mixes when exploring - change is the only constant. This split is what approval testing is founded on, yet I find that I see different things when I use the application and create documentation intertwined than when receiving documentation that I could review. One night in between the actions is enough for me to turn into a different person. 

The Final Deliverables

In the last minutes of one of the sessions, I cooked up a mindmap of what was in my head on the application. I had only covered a small portion, focusing on counting Discouraged words. 


The robot tests from the two sessions combined with cleanup are available at: 

Wednesday, July 29, 2020

An Exploratory Tester's Zephyr

Zephyr, in case you did not know, is a Jira test management extension. I dislike Jira, and I dislike Zephyr. But what I like and don't like does not (well, immediately) change the whole organization, and I play within the general bounds of organizational agreements. In this case, that means an agreement that tests are documented in Zephyr - for some definition of tests. 

This post is about how I play within those bounds, enabling exploratory testing. 

What Zephyr Brings In

Zephyr as a Jira plugin enables some very rudimentary test specific concepts:
  • Ticket reuse. When the jira ticket is a test, it can be run many times, like for example for each build we test. Normal Jira tickets are more straightforward in their lifecycle.
  • Steps. For some reason people still think tests have steps with expected values. If you don't know better, you might use these. DON'T. 
  • Mapping tests to releases. You can tell what test ticket connects with a particular Jira release. It shows the structure of how testing usually progresses in relation to changes. 
  • Grouping. You can group tests inside releases into test suites. You have many reasons you might want to group things. Zephyr calls mapping and grouping cycles. 
  • Run-time checklists. You can keep track of passes and fails, and things in progress. You can do it either on the level of a group of tests or on an individual test. You get a whole view of its own for making notes while testing a particular test case, the execution view. It seems to imagine all your test needs in one place: bug reporting, steps, notes. 
What I Bring In

When I document my plans of testing, I create a few kinds of tests:
  • [Explore] <write a one line summary here>
    These tests can be for the whole application like "Gap analysis exploration - learn all the problems they don't yet know", or for a particular purpose like "Release", or for an area of particular interest like "Use for people with disabilities". If I can get away with it, I have only one test case titled "[Explore] Release" and I only write notes on it at the time of making a release. What this assumes, though, is that release is something continuously flowing rather than one final act at the end - agile as if we meant it. 
  • [Scenario] <write a one line summary here>
    These tests are for very high-level splitting of the stakeholder perspectives I want to hold space for. They are almost like the ones I mark [Explore], except that together they try to summarize and remind me of the most important stakeholders and their perspectives in the product lifecycle. These are in the system context, regardless of what my team thinks their component delivery responsibility has been limited to.  
  • [Feature] <write a one line summary here>
    These tests I use when I have bad or non-existent documentation on what we promise the software will do. Together they try to summarize what features we have and intend to keep, but as a high-level checklist, not going into the details. These are in the context of the system, but more towards the application my team is responsible for. 
I use states of these tests to indicate scope ahead of me. 

If a test is Open (just like a regular Jira ticket), it is something I know we expect to deliver by a major milestone, like a marketing release all the little releases work towards, but I have not yet seen a version in action we could consider for the major milestone scope. It reminds me to ask if we have changed our mind on these. 

If a test is Closed, it is still alive and used, but it is something where we have delivered some version of it all the way to production and we intend to keep it alive there. 

If I could get away with one test case, that is all I would do. There are many reasons for me not to be able to get away with it: a newer colleague we need a shared checklist with, me needing a checklist and creating it here with minimal extras, or an auditing process that would not be satisfied with just that one ticket of [Explore] Release. 

The updating of test status is part of release activities for me. Someone needs to create a release in Jira, which usually happens when the previous release is out. For that release, I add at most two Cycles:
  • Pre-Release Testing
  • Release Testing
Again, if I can get away with it, I have only one: Release Testing, and within it, I have only one test: [Explore] Release, which I mark passed and write notes on if I have something useful to say. Usually the useful thing for me to say is "release notes, including scope of changes, are available here <link>". 

The way testing works for me is that I see every pull request, and nothing changes outside pull requests. I test selected bits and pieces of changes, assessing risk in the moment. I also have a set of test automation that is supposed to run blue/green (pick your color for 'pass') and that hunts down the need to attend to some detail. And I grow the set of automation. If you need 'proof' of passing for a particular release, we could in theory get that out of version control, but why would you really want that?

The Pre-Release Testing Cycle, if it exists, I fill when I think through what has happened since the last release and what still needs to happen before the next one, and I drag in existing tests from all three categories [Explore], [Scenario] and [Feature] to form a checklist. What this cycle contains tells of the themes and features I found myself limiting the testing to. And when a Pass on the cycle isn't sufficient documentation, I can always comment on the test ticket. 

My use of Zephyr is very different from my colleagues'. Perhaps also from yours? 

Tuesday, July 21, 2020

Anchoring an idea while Exploratory Testing an API

One of the things we get to test is a customer-oriented API. It's a particularly lovely test target for multiple reasons:
  • Read-only: It only gets data, and does not allow us to change data. Makes it simpler! 
  • Time-constrained on the API level: You can give dates as input, and that effectively freezes time for test automation purposes. You don't have to play with the concepts of today() and now(). 
  • Limited and understandable UI level edits to data: There are some things we can change from GUI that impact the API but they are fairly straightforward. 
The main reason it brought us joy for testing today is that we found a bug in it a few weeks back, where a particular combination returns a 500 error code (Server Error) when it should not, and back then we got to start creating some tests as a nice baseline for the time the bug would be fixed.
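
The baseline followed a simple pattern: document the known bug as a passing test, so that the fix announces itself as a test failure. A minimal sketch of that kind of baseline with pytest and requests - the endpoint and parameters are hypothetical stand-ins, not our real API:

    import requests

    API_URL = "https://api.example.test/v1/things"  # hypothetical endpoint

    def test_known_bug_combination_returns_500():
        # This combination currently (wrongly) answers with 500. We assert today's
        # behaviour on purpose: when the fix lands, this test fails and tells us to
        # go explore the corrected behaviour and update the baseline.
        response = requests.get(API_URL, params={"from": "2020-07-01", "to": "2020-07-31"})
        assert response.status_code == 500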

The long-awaited message of the bug fix arrived today, and the first thing we did was pull out the tests we had automated the last round (asserts and approvals - I wrote about those earlier as we set the project up). We ran the tests, expecting to see a failure on the assert that expects the 500 for that bug. The results surprised us.

We still had that test passing, but now we also had another test failing with 500. Instead of going forward with the fix, we had momentarily gone backwards. 

Not long after, we got to try again with a new version. This time it was just as we expected. Within 30 seconds of realizing the version was available, we knew that, on the level we had automated our tests before, they now matched today's expectations. 

For those of you concerned about the tests not running in CI: it takes about the same time to go and check they are blue, as we did not place these tests among the ones blocking the pipeline. These tests weren't designed for the pipeline; they were designed as an entry point for exploratory testing, where we could leave some of them behind for the pipeline or for other purposes. 

We quickly drafted our idea of what we would test and change today:
  • Capturing and reviewing for correctness the combination that we previously documented as receiving the 500 response for that bug
  • Ensuring we could see latest data after the most recent changes
  • Having easily configurable control over dates and times we had not needed in our tests before
  • Making some of the tests' approval files smaller in size, as long as they did not lose the idea of what we were testing with them
What turned out to be the most fun thing to test was the latest data. Starting with that idea, we found multiple other ideas of what to test, including things around changing more values in the data, and things around multiple overlapping limits. We needed to remind ourselves, multiple times, that we still had not seen our starting idea in action, even if we had seen many other ideas. 

As a conclusion of today, we came to the importance of an anchor, and of remembering that anchor. If writing it down helps, write it down. If having a pair that keeps you honest helps, have a pair. Whatever works for you. But a lot of times, when we do some testing, we end up forgetting what we set out to do in the first place. Anchoring an idea allows us to discover while we explore, and still stay true to what we originally set out to do. 

We ended up refactoring our test code a bit to make it more flexible for the ideas we had today, and we discovered one test we wanted to keep for the future. It started off with one name and concept, yet through exploring we learned that what we wanted to keep for the future was different from what we wanted and needed to do today. 
Truth is, we always throw some away, and that is where I recognize learning and thinking is going on. Can keep and should keep are two different things.  

Saturday, July 18, 2020

Dealing with Rejection in Teams

Have you ever come back from a conference, full of energy from the great ideas the speakers shared, gone to your team and suggested trying something new, only to hear that it's an idea that "would not work here", that "now isn't the time for that", or that "the idea is stupid" - implying you're stupid for liking that idea? Surely your team isn't rejecting you, they are rejecting ideas you got really excited about. 

And it's not just the ideas that come from outside, but ideas of doing something different, like having a cup of coffee with colleagues at least once a week. I often even get pushback on naming variables in pull requests, only sometimes towards something I agree is a better name, but I stop fighting so that we don't get stuck. We get rejections of our ideas all the time. 

When you learn all your ideas are rejected, you move on to only dealing with ideas on a personal level and obeying the ideas of the powers that be. You take what others are offering, and within the box of them not seeing you do things, you do what you can right inside that little box. 

I'm someone who counts. I count how many times my ideas get brushed aside. I count how many times I brush aside other people's ideas, and who the people are who reject others' ideas the most. I have been rejected a lot, yet I still keep trying, because if I needed to give up, I would need to give up on the industry. 
Here are some ideas for how I deal with the rejection we get in teams:
  • repetition. Ask once, ask again. Kids have no shame in this, but adults get punished fairly soon. So if you repeat the same ask, be careful about how much annoyance you add. A weekly repetition is probably better than repeating it many times in a row. But also, people end up liking things they've heard many times better than those they hear for the first time. 
  • finding right time. Ask when they are more likely to say yes. Did you know that as a convict asking to get out of jail, you are more likely to get out if your case is heard just after lunch? Asking after completing a major milestone is likely to give different results than asking in the middle of the worst crunch.
  • prioritize what you ask for. You'd like to see 10 changes, so select one. It could be the one that means the most to you. It could be the one they are most likely to accept. 
  • finding right words. It's not what you say, it's how you say it. Sometimes it is true, sometimes it's an excuse. Try to find a way of explaining it that makes the one listening to your proposal understand it. It could be logic. It could be financial benefits. It could be an appeal to your personal happiness ('yay, that worked for me on ensemble (mob) programming'). 
  • finding right messenger. Sometimes you will never be heard, so send someone else. I did this to get no estimates started a few jobs ago, and too many more times since that I could quote. I like to say: "best ideas win if we care about work over credit", and I feel sad about how much of my credit I need to give away before reclaiming it through promoting results. 
  • finding right medium. Some people react better to verbal while others react better to written requests. Some people forget all things verbal and are only safe being asked in writing. Use one first, another later. 
  • convincing a subgroup. If you have a few people suggesting something, some folks hear groups better than individuals. You may need to get buy-in from people who are not making the decision to get through to those making the decision. 
  • make it sound temporary. Call it an experiment. Agree on a time you will do it, and a time when you will give up, even when you are really thinking you should keep doing it. This worked great for getting to an agile team with no product owner and significantly improved results. 
  • confronting the rejection pattern. Tell people you've observed that your suggestions are rejected. Keep track of what ideas they did reject, and suggest a rule that  they must experiment with at least one every six months, or one out of ten you produce. This one is DANGER! 
  • visualizing whose ideas we went with. Draw on a whiteboard a tally of names and label it as ideas proposed/implemented. See if seeing the pattern helps people realize they could work with it.
  • showing it works without the others. Just do it yourself. A lot of tech ideas only get traction if you show up with a prototype that works. You could also find others in the community instead of your organization, and work the way you wanted to in your free time on learning projects. 
  • build a track record. Get some of your ideas through. Show you can try small and are willing to step away if they fail. Building that confidence may help them hear you better. 
  • create a patience raindance. Create a little routine that helps you through all this rejection so that you can still try again. My patience raindance routine is tweeting. It's a mysterious call for the powers that be to grant me patience to try again until I succeed in getting to a happy place. 
  • amplify ideas of others. Don't be the person who shoots other people's ideas down. Try to approach them with the "let's try it out" attitude even when you hate it. You'd like them to do it for you, do it for them. 
Finally, dealing with rejection is a skill, and your need to develop that skill depends on your status in the organization and team. It is likely you will deal with this more if you are a tester, more if you are a woman, and even more if you are not white. Cope with rejection, but never ever give up. Only through a No can you get to a Yes. Protect yourself on the way there with versatile strategies for knowing when to give up and how to get to a yes. 

This blog post is brought to you by a Twitter pile-on of helpful agilists who seem to think that getting rid of or changing the contents of a daily meeting is something you just change in a team. They may come from a position of privilege where, when they point out a change, people just do it, but I more often find myself not in that situation, and thus employing all the ways I have to be tenacious and get there anyway. I've been there too, and I recognize the difference. 

Friday, July 17, 2020

Where in Testing is Exploratory Testing?

When people start learning about testing, and agile testing in particular, they quickly get to a model of testing quadrants. Testing quadrants, popularized by Janet Gregory and Lisa Crispin with their Agile Testing book, place a collection of testing words in four quadrants. When you go look, you can find "exploratory testing" in the top right corner, meaning it is considered Business Facing Product Critique. 

The term was coined 35 years ago to express a specific approach to testing; making it a technique in one corner of the quadrants was not the intent. It expressed a style of testing unfamiliar to the majority, one observable in Silicon Valley product companies: skilled, multidisciplinary testing under product development constraints. 

The world moved on, and testing certifications had a hard time placing a whole approach to testing into their views of the world. With the introduction of ways to manage this style of testing (session- and thread-based exploratory testing), those seeking to place it in the technique box took the management style and defined their idea of using only limited time - separately defined sessions - on this way of testing, while everything it was born to be different from remained in the center. 

That means that in the modern world, exploratory testing is two things:
  1. a technique to fill gaps that all other testing leaves
  2. an approach that encapsulates all other testing 
As a technique, you can put it in the corner of the quadrants. As a technique, you can put it on top of your test automation pyramid and make jokes about the pyramid turning into an ice cream cone with too much exploratory testing on top. But as an approach, it exists for every quadrant, and for every layer. 

Due to the great confusion, questions about the other testing I do on top of exploratory testing are quite common. 

This response today inspired me to think about this a little more. 
Exploratory fills the gaps.
But for me, it does not fill the gaps. It is the frame in which all other testing exists. It is what encourages short loops to learn, challenges the limits of what I already have learned, makes me pay attention to what is real, and creates a sharp focus on opportunity cost. 

I scribbled an image on paper that I recreated for the purposes of this blog post. If all these shapes here are the other kinds of testing mentioned: feature testing, regression testing and non-functional testing, what is the shape of exploratory testing? 
The shape of exploratory testing is that it fills the gaps. But it also defines the shape of all the other tests. Its borders are by design fuzzy. We are both right: for me it is the frame in which all the other testing exists, even when it fills gaps. 

There is such a thing as non-exploratory testing. It's the one where the shapes of the other tests stay in place and are not actively challenged, and where particular artifacts matter more than considering their value and opportunity cost. 

Where I worked, we had two teams doing great at testing. Both teams explored and left behind test automation as documentation. When asked what percentage they automated, their responses were very different. One automated 100%, which was possible by not having as many ideas of what testing could be. The other automated 10%. Yet they had just as much automation as the first, and often found problems outside the automation box. The easiest way to get to 100% is by limiting your ideas of what testing could be. 

Seeing there's plenty of space in between the shapes and plenty of work in defining the shapes can be a make or break for great testing. 


Tuesday, July 14, 2020

Starter Project Overload - just tell me the steps

Today, I ended up spending a few hours setting up a JavaScript - Jest - Puppeteer environment, to enable comparison of the tests we do against a third framework (we did this before with record-playback within Datadog and with Robot Framework). The two others served as stepping stones to understanding what you can and can't test and what maintenance of the scripts feels like, and continuing with either of the two in our team would mean that these tests belong to the tester alone. So JavaScript is a definite priority for sharing with the rest of the team. 

Googling around did not make getting set up easy and fluent. There is too much instruction, unsurprisingly. The question from the intern is well warranted: "How the hell are you supposed to know which of these articles are worthwhile sources?"

My blog does not match the criteria for worthwhile sources I explained to them: seek material as close to the original open source project as possible. But I thought I'd still write down, for fun and benefit, how simple it turned out to be.

To get the jest - puppeteer environment running to a point where you can start writing your own tests, here is what to do. 
  1. Install yarn (=> google "install yarn" and install as instructed)
    Yarn is a package manager. It pulls down packages someone else made available. 
    NPM is another package manager. You could use that too, the commands are just different then. 
    And yes, a package manager is something you need on your development machine. The packages it brings down into your project are different in the sense that they are dependencies for what you build. Installing dependencies with the package manager is part of project setup, and you don't want all those files to be the same for every single programming project you do on your computer. 

  2. Create a folder for your programming project to reside in and open the folder in VS Code
    This step is for familiarity and control. Having nothing, and being in the empty folder when working with a new tool, gives a sense of control. 

  3. Run Terminal (from Terminal | New Terminal in VSCode) and install dependencies
    You will want to run 
    yarn add jest
    yarn add puppeteer
    yarn add jest-puppeteer
    That will create a bunch of files in a node_modules folder under your previously empty folder.
     
  4. Add jest to your path so that you can run it from command line
    yarn global add jest
  5. Initialize jest to create the configuration file
    jest --init
    This creates jest.config.js file. 

  6. Test that jest alone works for you

    Create a file sum.js
    function sum(a, b) {
      return a + b;
    }
    module.exports = sum;

    Create a file sum.test.js
    const sum = require('./sum');

    test('adds 1 + 2 to equal 3', () => {
      expect(sum(1, 2)).toBe(3);
    });

    Straight out of Jest Getting Started! 

    Run the tests
    jest
  7. Add jest-puppeteer to jest.config.js file
    "preset": "jest-puppeteer"
    While at it, comment out 
    testEnvironment: "node",
  8. This step was the one that I was hunting for an hour! For everyone's benefit, I could have used the energy I put into this post to help correct the original Jest documentation, but instead I ended up adding to the newbie confusion. 
     
  9. Test that jest puppeteer works for you

    create a file google.spec.js
    (NOTE! spec, not test, or it won't work - the second thing that was causing me pain in this flow) 

    describe('Google', () => {
        beforeAll(async () => {
          await page.goto('https://google.com');
        });
      
        it('should be titled "Google"', async () => {
          await expect(page.title()).resolves.toMatch('Google');
        });
      });

  10. Replace all tests/specs with whatever you want to work on. 
Next up is putting these in a container without using a container project starter. But that must be another blog post for my future reference. 
 

Thursday, July 2, 2020

Never tested an API? - A Python Primer from My Summer Trainee

With the first of our releases, I taught my summer trainee the most straightforward way I could to test an API. I gave them a URL (explaining what a URL is), showed how the different parts of it indicated where you connected and what you were asking for, and ended up leaving the office for four hours, letting them test the latest changes just as other people in the team wanted to get out of the office for their summer vacation. They did great with just that in my absence, even if they felt the responsibility of releasing weighing on them. 

No tools. No postman. Just a browser and an address. Kind of like this: http://api.zippopotam.us/us/90210


The API we were testing returned a lot more values. We were testing 20000 items as the built-in limit for that particular release, and it was clear that the approach to determine correctness was sampling. 

Two weeks later, today we returned to that API, with the idea that it was time to do something more than just looking at results in the browser. 

Python, in the interpreter

We started off by opening a command line, and starting python. 


As we were typing in import requests, I explained that we're taking a library into use. Similarly I explained print(requests.get("http://api.zippopotam.us/us/90201")), forgetting the closing parenthesis at first and adding it on a line after. 

With the 200 response, I explained the idea of this code meaning it was OK, but that we'd need more to see the message we had earlier seen in the browser, and that while we could also use this for testing, we'd rather move to writing our code into a file in an IDE. 

Python like a script, in Pycharm

As we opened PyCharm and created a .py file to write things in, the very first lines were exactly the same ones we had been running from the command line. We created two files: first requirements.txt, in which we only wrote requests, and a second file that ended up with the name experiments.py. As the two lines were in, PyCharm suggested installing what requirements.txt defined, and we ensured it was still running just the same. At first we found the Run menu in the IDE; later the little green play buttons started to seem more appealing, as well as the keyboard shortcut for doing this often. 

We replaced the print with a variable that could keep our response to explore it further
response = requests.get("http://api.zippopotam.us/us/90210")
Typing in response. and Ctrl+Space, we could see options for what to do with it, and we settled on 
print(response.text)
At this point, we could see the same text we had seen before in browser, visually verify it just as much as with the browser and were ready to move on. 

Next we started working on the different pieces of the URL, as we wanted to test the same things in different environments, and our API had a few more options than the one I use for educational purposes here. 

We pulled out the address into a variable, and the rest of it into another, and concatenated them together for the call. 
import requests
address = "http://api.zippopotam.us/"
rest_of_it ="us/90210"
whole_thing = address + rest_of_it
response = requests.get(whole_thing)
print(response.text)
The API we were playing with had a lot more pieces. With environments, names, IDs, dates, limits and their suffixes in the call, we had a few more moving parts to pull out with the very same pattern. 
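
For our own API, the same pattern simply grows more variables. A sketch with purely hypothetical parts, standing in for the real environments, ids and dates we used:

    import requests

    environment = "https://test.example.com/"   # hypothetical environment
    resource = "things/"                        # hypothetical resource name
    item_id = "12345"                           # hypothetical id
    suffix = "?from=2020-07-01&to=2020-07-31&limit=20000"
    whole_thing = environment + resource + item_id + suffix
    response = requests.get(whole_thing)
    print(response.status_code)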

As we were now able to run this for one set of values, our next step was to see it run for another set of values. On our API, we're working on a data-specific bug that ends up giving us a different status code of 500, and we wanted to move towards the idea of seeing that here. 

Making the status code visible with 
print(response.status_code)
we started our work to have calls where the whole_thing wasn't just what we started with but had multiple options. 
#rest_of_it ="us/90210"
rest_of_it = "fi/00780"
Every option we tried got documented, but the state of changing one into a comment and another into the one we would run was not what we'd settle for. 

We wanted two things: 
  • a method that would take in the parts and form the whole_thing for us
  • a way of saving the results of calls 
We started with keeping a part of the results by introducing pytest, writing it into requirements.txt as a second line. 
requests
pytest
Again we clicked OK to add what our environment was missing as PyCharm pinged us about it, and we saved the response code by codifying it into an assert. We remembered to try other values to see it fail, so we could trust it in the first place. 
assert response.status_code == 200
With us still wanting the two things above, I interrupted our script creation to move us a step in a different direction. 

Python like a Class and Methods, in Pycharm

We googled for "pytest class example" under my instructions, and after not liking the first glance of the first hits, we ended up on a page: https://code-maven.com/slides/python-programming/pytest-class

We copied the example as the contents of an experiments_too.py file in our IDE. 

We hit a momentary mutual hiccup, figuring out three things: 
  1. We needed to set pytest as our default test runner from File | Settings | Tools | Python integrated tools | Default test runner. 
  2. The file must have Test in its name for it to be recognized as tests
  3. We could run a single test from the green play button next to it
The original example illustrating setup and teardown had a little too much noise, so we cleaned it up before starting to move our script into the structure.
class TestClass():
    def setup_class(self):
        pass

    def teardown_class(self):
        pass

    def setup_method(self):
        pass

    def teardown_method(self):
        pass

    def test_one(self):
        assert True
We moved everything from the script we had created inside test_one() 
def test_one(self):
    import requests
    address = "http://api.zippopotam.us/"
    # rest_of_it ="us/90210"
    rest_of_it = "fi/00780"
    whole_thing = address + rest_of_it
    response = requests.get(whole_thing)
    print(response.text)
    assert response.status_code == 200
And we moved the import from inside the test to the beginning of the file, to have it available for what we expected to be multiple tests. With every step, we ran the tests to see they were still passing. 

Next, I asked the trainee to add a line right after def test_one(self) that would look like what we imagined we'd like to call to get our full address. We ended up with
define_address("foo", "bar")
representing us giving two pieces of text that would end up forming the changing parts of the address. 

A little red bulb emerged in the IDE next to our unimplemented method (interjecting TDD here!), and we selected Define function from the little menu of options on the light bulb. The IDE created a method frame for us.
def define_address(param, param1):
    pass
We had already been through the idea of Refactor | Rename, coming up with even worse names and following the "let's rename every time we know a name that is better than what we have now" principle. I wouldn't allow just typing in a new name, but always going through Refactor, to teach the discipline of benefiting from the tooling. Similarly, I would advise against typing whole words, instead allowing the IDE to complete what it can. 

We moved the piece concatenating the two parts together into the method (ours had a few more parts than the example). 
def define_address(part1, part2):
    whole_thing = part1 + part2
    return whole_thing
and were left with a test case where we had to call the method with relevant parts of the address
def test_one(self):
    # rest_of_it ="us/90210"
    response = requests.get(define_address("http://api.zippopotam.us/", "fi/00780"))
    print(response.text)
    assert response.status_code == 200
The second test we'd wanted, left as a comment in the first, became obvious, and we created a second test. 
def test_two(self):
    response = requests.get(define_address("http://api.zippopotam.us/", "us/90210"))
    assert response.status_code == 200
Verifying the contents of response.text was still ahead of us.

Now that we had established the idea of test cases in a test class, and the structure of a class over writing just a script, with a hint of TDD, we moved our attention to saving the results of the calls we were making. Seeing "200 success" isn't quite all we'd look for. 

In the final step of the day, we introduced approvaltests into the requirements.txt file.
approvaltests
pytest-approvaltests
We edited two lines of our file, adding
from approvaltests.approvals import verify
and changing print to verify
verify(response.text)
We ran the tests from the terminal once to see them fail (we had seen them be ignored without this step on the usual run) 
pytest --approvaltests-use-reporter='PythonNative' TestClass.py
We saw a file TestClass.test_one.received.txt emerge in our files, and after visually verifying it captured what we had seen printed before, we renamed the file to TestClass.test_one.approved.txt. We ran the tests again from the IDE to now see them pass, edited the approved file to see it fail, and corrected it back to verifying that our results match. 

To finish off the day, we added verification to our second test, again visually verifying and keeping the approved file around. 
def test_one(self):
    response = requests.get(define_address("http://api.zippopotam.us/", "fi/00780"))
    verify(response.text)
    assert response.status_code == 200
And finally, we defined an approvaltests_config.json file to tell approvaltests where the files it creates should go
{
    "subdirectory": "approved_files"
}
These steps give us what we could do in a browser, and allow us to explore. They also help us save results for the future with minimal effort, and introduce a baseline from which we can reuse things we've created. 
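
Put together, the test file built up during the day looks roughly like this - a sketch assembled from the snippets above, with the second test getting its approved file the same way:

    import requests
    from approvaltests.approvals import verify


    def define_address(part1, part2):
        whole_thing = part1 + part2
        return whole_thing


    class TestClass():

        def test_one(self):
            response = requests.get(define_address("http://api.zippopotam.us/", "fi/00780"))
            verify(response.text)
            assert response.status_code == 200

        def test_two(self):
            response = requests.get(define_address("http://api.zippopotam.us/", "us/90210"))
            verify(response.text)
            assert response.status_code == 200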

Looking forward to seeing where our testing takes us next with the trainee. 

Wednesday, July 1, 2020

Learning about Learning

As an exploratory tester, I've come to appreciate that a core of my skills is that I have been learning about learning, and having practiced learning mostly about products, technology, organizations, businesses and people for a quarter of a century, I have somewhat of a hang of it.

Having the hang of it shows particularly when I change organizations, like I did two months ago. Even if I say so myself, I've taken in the new organization at a good pace and have been contributing since the beginning, reaching my expected level of exceeding expectations starting from the second month. 

Even though I still consider testing (and software productivity) my professional core, I find that the stuff I am learning about learning applies just as much to other roles. Today I took a moment to deliver a 30-minute broadcast inside my organization, talking just about learning. Since most of you could not join an internal session, I decided on a blog. 

Foundation, the Math

Imagine you were awesome. Your results are great. You know how to get the job done. Every day when you come to work, you deliver steadily. Sounds great? 

Many of us are awesome and deliver steadily. We are as productive today as we are in a year. Solid delivery. 

But learning changes the game. 

Imagine you and your colleague are equally awesome. You both deliver steadily today. But your colleague, unlike you, takes time away from every single working day to improve their results. They find a way to become 1% better every week, shaving off 4 minutes of time from completing something of significance. In a year, you're still awesome like you were before. But your colleague is 1.7 times their past self due to learning. 

1% a week may sound like a lot, or a little, but the learning accumulates. If we learned in ways that transform our results 1% each day, a year gives us 37.8 times our past selves. 
This splits our working days into two activities: we are either learning or contributing. Both are valuable. We could use most of our office hours for learning and still, with that 1% improvement every day, match our past selves' output within the year. The investment in learning is worthwhile.
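For anyone who wants to check the math, the compounding is simply 1.01 raised to the number of improvement cycles. A quick sketch:
# Compounding 1% improvements: each cycle multiplies results by 1.01
weekly = 1.01 ** 52     # improving 1% every week for a year: ~1.68x
daily = 1.01 ** 365     # improving 1% every day for a year: ~37.8x
print(round(weekly, 2), round(daily, 1))   # 1.68 37.8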

From Learning Alone to Learning Together

Now that we've established the idea that learning is a worthwhile investment, we can discuss our options for using that investment. Learning does not happen only when we take special learning time to show up at a course; most of it happens on the job. Volunteering to do that cloud configuration you've never done before - now you have. Volunteering to take a first stab at the UX design even though you're not a UX designer - now that you are in control of the task and your learning, those with more experience can help you learn. Learning is a deliberate action. 

The usual way we work is solo. We bring our best and worst into the outcome we're producing, and the traditional way of approaching this is that others join in after you, giving you feedback on things you may have missed. 
Every comment on a pull request helps you address something you missed now, and learn for later cycles. Every bug someone reports after you, both internally and externally, does the same. Every time a new requirement emerges because the application serves as someone else's external imagination, you learn how you could have seen things coming, and you become able to make informed rather than accidental choices. 

With the traditional solo-and-handoff style, every one of us needs to learn just enough about the work to be able to contribute our bit. If we don't know much about the work, that limits our contribution to what we know. 

Imagine instead that you were learning through pairing. Building the understanding of the task together. Not filtering the feedback based on what makes sense to ask of you now that you've already implemented it one way. Instead of getting the best out of you into the work you're doing, you get the best out of both of you. 
Ensemble programming brings the pair to more people, a whole team, and sees the curve positively flatten as everyone is learning and contributing, provided we first learn how to listen and to work well together. 

From individuals to seeing the system

Learning on its own is a little abstract. What is it that we are learning about? 

What I talked about today is that we're learning everything we can to optimize the meaningful outcomes of software development. It might be learning a keyboard shortcut to save time in completing an action (microlearning). It might be learning to innovate how collaboration works in our organization. It helps to frame software development as a process where smart people transform ideas into code without being left alone with all that responsibility. 

Nothing changes for the users unless we change the code. 

If we knew the right ideas for changing the code, we could make the changes with no one other than developers involved. But we understand that fine-tuning ideas is where the rest of the organization comes into play, and that software does not exist in a vacuum without the services around it. 

Some of those percents of betterment come from stopping looking only at ourselves and starting to look at the system of people that co-creates the value. 

Learning Never Stops

The final piece I discussed today was the idea of Senior vs. Junior. It's not that the former knows more than the latter on some absolute scale. Knowing something is multidimensional, and even those of us who are seniors don't know everything. Partially this comes from the fact that there is already too much for one person to know, but also from the fact that more to know emerges every day. 

Just like a senior takes on work they need to figure out as they do it, so does the junior. The complexity of the tasks they are expected to figure out is very different, but one of the powers of great seniors is that we can accelerate the learning of the juniors. We don't have to put them through our struggles; they will find a new, innovative struggle even when the latest of what we can do to enable them is in place. 
Even if a senior knows more things, there are still things they can learn from the junior if they listen and pay attention.  

Ideas to take this further

As part of my in-company broadcast series, where I talk about things I care about and allow people to join me for a conversation, today's conversation part was particularly successful. My theme today was the ROI (Return on Investment) of Learning, and three themes stood out from the comments: 
  • Unlearning to make space for new learning - can take double the effort and requires listening to new people giving hints on things you may need to act on
  • New to industry or new to an organization - no need to deliberately look for things to learn, the work already stretches you. 
  • Microlearning - more examples of the little stretches, more stories of things we didn't know but learned would help us a long way. 
There's a whole book I'm writing on this in the context of Exploratory Testing. I'm always open to a good conversation on this, and I prefer a video call over a wall of text, a wall of text in public over one in private, and twitter-size over a wall of text. 

Sunday, June 14, 2020

Automation First Microheuristic

Developers announce a new feature is available in the build and could use a second pair of eyes. What is the first thing to do? Changing companies made me realize I have a heuristic on deciding when I automate test cases as part of exploratory testing. 

Both automating and not automating end up bringing in that second pair of eyes, that seeking of understanding of the feature and how it shows up in the relevant flows. The first level of the choice of whether to start with automating is whether you are capable of automating at all. Being capable makes the choice available on an individual level; only after that can it be a choice. 

When that choice is available, these things could impact choosing Automation First (compressed into a rough checklist sketch after the list). 
  • Belief that change in the basic flow matters beyond anything else you imagine wrong with it
    • When automating, you will visually and programmatically verify the basic flow as you are building it. Building it to a good, reliable level takes longer than just looking at it, but the automation then remains around to see if changes in the software change its status. 
  • Availability of quality dimensions (reliability, environment coverage) through automation
    • If your application domain's typical issues are related to timing of use, or to a multitude of environments where one works while others may not, automating first gives you a wider scope than doing it manually ever could. 
  • Effort difference isn't delaying feedback. 
    • With an existing framework and pipeline, extending it is an effort to consider. Without them, having to set things up can easily become the reason why automating takes so long it makes sense to always first provide feedback without it to ensure it can work.
  • Brokenness of application
    • Humans work around broken / half-baked features whereas writing automation against it may be significantly harder. 
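The factors above compress into something like the checklist below. This is only an illustrative sketch with names I made up for this post; the real decision stays a judgment call on the specifics.
# A rough, illustrative compression of the factors above - not a real algorithm.
def automate_first(basic_flow_change_matters_most, needs_reliability_or_env_coverage,
                   framework_and_pipeline_exist, application_still_broken):
    if application_still_broken:
        return False  # humans work around broken / half-baked features more easily
    if not framework_and_pipeline_exist:
        return False  # setup effort would delay the feedback
    return basic_flow_change_matters_most or needs_reliability_or_env_coverage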
I was thinking of this as I realized that the automated tests on my current system see very few problems. There is no relevant environmental difference like there was at my previous job. Automation works mostly in the change dimension, unlike at my previous job. 

Going into the moment of making this choice, I find I still go back to my one big heuristic that guides it all: Never be bored. First or Second does not matter as much as the idea that keeping things varied helps keep me away from boredom. Documenting with automation makes sense to avoid that boredom in the long run. 

Saturday, June 13, 2020

Training an Exploratory Tester from the Ground Up

This summer gives me the perfect opportunity: a summer intern with experience of working life outside software, and I get to train them into being a proper Exploratory Tester. 

Instead of making a plan of how to do things, I work from a vision, and adapt as I learn about what the product team needs (today) and what comes easily for the trainee entrusted to my guidance. 

Currently my vision is that by end of the summer, the trainee will:
  • Know how to work effectively in scope of a single team as tester inside that team
  • Understand the core a tester would work from and regularly step away from that core to developer and product owner territory 
  • Know how to see versatile issues and prioritize what issues make sense to report, as each report creates a response in the team
  • Know that the best bug reports are code, but it's ok to learn skills one by one to get to that level of reporting ability - being available is the second best thing 
  • Understand how change impacts testing, and guide testing by the actual change in the code bases in combination with the constraints communicated for that change
  • Write test automation for WebUI in Jest + Puppeteer and Robot Framework and take part in team choice of going with one or the other
  • Operate APIs for controlling data creation and API-based verifications using Java, Python and JavaScript.
  • Understand how their testing and test automation sits in the context of environments it runs in: Jenkins, Docker and the environment the app runs in: Docker, Kubernetes and CI-Staging-Prod for complex set of integrated pieces
  • Communicate clearly the status of their testing and advocate for important fixes to support 'zero bugs on product backlog' goal in the team
  • Control their own balance of time between learning and contributing in a way that matches their personal style, so that they don't require task management but lead the testing they do on their own
  • Have connections outside the company in the community to solve problems in testing that are hard to figure out internally
We got started this week, and are one week into the experience. So far they have:
  • Reported multiple issues they recognized are mostly about usability and language. I jumped on the problems with functionality and reported them; demoing those reinforced the idea that they are seeing only particular categories for now. 
  • Navigated command line, filesystem, Git, and IDE in paired setting and shown they pick things up from examples they experience, repeating similar moves a day later from learning the concepts. 
  • Skipped reporting for a language bug and fixed it with PR instead. 
  • Covered release testing with a provided one-liner checklist for the team's first release. 
  • Provided observations on their mentor's (my) models of how I train them, leading me to the insight that I both work hard to navigate on a higher level (telling them what they should get done, and only digging into exactly how to do it if they don't already know) and respond to questions with questions to reinforce that they already know some of the stuff.
  • Taken selective courses from Test Automation University on keywords they pick up as I explain, as well as reading tool-specific examples and guidelines. 
  • Explained to me how they currently model unit - service - UI tests and mixed language set the team has. 
  • Presented a plan of what they will focus on achieving next week with Jest-Puppeteer 1st case with our application. 
After the week, I'm particularly happy to see the idea of self-management and *you leading your own work but radiating intent* is catching on. Them recognizing that they can't see all types of bugs yet is promising, as is their approach to learning. 

Every step, I prepare them for the world where I won't be there to guide them but they know how to pull in help when they need it - inside the company and outside. 

Saturday, May 23, 2020

Five Years of Mob Testing, Hello to Ensemble Testing

With my love of reflection cycles and writing about it, I come back to a topic I have cared a great deal for in the last five years: Mob Testing.

Mob Testing is this idea that instead of doing our testing solo, or paired, we could bring together a group of people for any testing activity, using a specific mechanism that keeps everyone engaged. The specific mechanism of strong-style navigation insists that the work is not driven by the person at the keyboard, but by someone off the keyboard using their words, enabling everyone to be part of the activity.

From Mob Programming to Mob Testing

In 2014, I was organizing the Tampere Goes Agile conference and invited a keynote speaker from the USA with this crazy idea of whole-team programming that his team called Mob Programming. I remember sitting in the room listening to Woody Zuill speak, and thinking the idea was just too insane and would never work. That reaction forced my usual response: I have to try this, as it clearly was not something I could settle by reasoning alone.

By August 2015, I had tried mob programming with my team where I was the only tester in the whole organization, and was telling myself I did it to experience it, that I did not particularly enjoy it, and that it was all for the others. True to my style, I gave an interview to Valerie Silverthorne, introduced through Lisa Crispin and said: "I'm not certain if I enjoy this style of working in the long term."

September 2015 saw me moving my experimenting with the approach away from my workplace and into the community. In September, I ran a session on Mob Testing at the CITCON open space conference in Helsinki, Finland. A week later, I ran another session on Mob Testing at the Testival open space conference in Split, Croatia. A week later, in Jyväskylä, Finland. By October 22nd, I had established what I called Mob Testing, as I was using it on my commercial course as part of TinyTestBash in Brighton, UK.

I was hooked on Mob Testing, not necessarily as a way of doing testing, but as a way of seeing how other people do testing, for learning and teaching. With something that carries as much implicit knowledge and assumption as testing, doing the work together gave me an avenue to learn how others thought while they were testing, what tools they were using and what mechanisms they were relying on. As a teacher, it allowed me to see if a model I taught was something the group could apply. But more than teaching, it created groups that learned together, and I learned with them.

I found Mob Testing at a time when I felt alone as a tester in a group of programmers. Later, as I changed jobs and was no longer the only one of my kind, Mob Testing was my way of connecting with the community beyond the chitchat of conceptual talk and definition wars. While I ran some trainings specifically on Mob Testing, I was mostly using it to teach other things in testing: exploratory testing (including an inkling towards documenting as automation), and specific courses on automating tests.

Mob Testing was something I was excited enough about to travel and talk about it in Philadelphia, USA as well as Sydney, Australia, and a lot of different places in between. In November 2017, I took my Mob Testing course to Potsdam, Germany for Agile Testing Days. I remember this group as a particularly special one, as it had Lisi Hocke as a participant, and building on what she learned there, she has taken Mob Testing further than I could have imagined. We both have our day jobs in our organizations, and training, speaking and sharing is a hobby more than work.

A year ago, I learned that Joep Schuurkes and Elizabeth Zagroba were running Mob Testing sessions at their workplace, and I was delighted to listen to them speak of their lessons on how it turned out to be much more about learning than contributing.

We've seen the communities of Mob Programming as well as Mob Testing grow, and I love noticing how many different organizations apply this. When I meet a group to talk about anything testing, it is more the rule than the exception that they mention that them trying this crazy thing out is somehow linked back to me sharing my experiences. Community is powerful.

Personally, I like to think of Mob Testing as a mechanism to give me two things:
  1. Learning about testing
  2. Gateway to mob programming 
I work to break up teams of testers and grow appreciation of true collaboration, where developers and testers work so closely that it becomes easy to rename everyone developers.

Over the years, I wrote a few good pieces on this to get people started:


With a heavy heart, I have listened to the parts of the community so often silenced on how mob programming and testing as terms are anxiety inducing, and I agree. They are great terms for specifically finding this particular style of programming or testing, but they need replacing. I was weighing two options: group programming/testing and ensemble programming/testing. For recognizability, I go for the latter. I can't take out all the material I have already created with the old label, but I will work to make new materials with the new label. Because I care for the people who care about stuff like this.