Tuesday, October 12, 2021

Three Stories Leading Into Exploratory Testing

At the end of September, I volunteered for a small lunch-time panel on exploratory testing at the conference. I sat down for a conversation and had no idea it would be such a significant one for my understanding. The panel was titled "To Explore or Follow the Map" and I entered the session with concerns about the framing. After all, I explore with a map and follow the map while exploring. 

Dorota, our session facilitator, opened the session by inviting stories like the one she was about to share, on first experiences with exploratory testing. 

Paraphrasing Dorota from memory, she shared a story of how her first testing experience in the industry was on a military project where the project practice included requirements analysis and writing and executing test cases to do the best possible testing she could. One day the test leader invited all the testers to a half-a-day workshop where they would do something different. The advice was to forget the test cases and explore to find new information. And they did. The experience was eye-opening about all the things the thorough test case writing was making them miss. 

I listened to Dorota's account and recognized she was talking of exactly the expectations I am trying to untangle in my current organization. Designing test cases creates lovely requirement-to-test linking but misses all too many of the issues we would expect to find before the software reaches our customers. 

Next up was Adam, who shared a story of his first job in testing. His manager / tutor introduced him to the work expected from him by giving him an Excel sheet with test cases and a column in which to mark the pass/fail results. Paraphrasing his experience from memory, he shared that after he finished the list, the next step was to start over from the beginning. The enlightenment came at a conference where he met an exploratory testing advocate and realized there were alternatives to this. 

My story was quite different. When I first started as a tester, I was given test cases, but also a budget of time to do whatever I wanted with the application, which I consider taught me to understand the application and its problems better. The test cases gave some kind of structure for talking about progress, and I could also log my hours on whatever I was doing outside the test cases without very rigid boundaries between the activities. The time budget and expectations were set for the testing activity as a whole, and I could expect a regular assessment of my results by the customer organization's more seasoned testers. The mechanism worked so that for a new person, the first "QA of testing" was feedback, and the later ones carried a financial penalty if I was missing information they expected me to reasonably find with the mix of freedom and test cases I started with. 

While I was given space to do better, I did not do better. No one supported me the way I nowadays aspire to support new joiners. Either I knew what I was doing or a minor penalty on invoicing lay ahead; I would still be paid for all of my hours. I never knew anything but exploratory testing, and the stories of injecting it into organizations as Friday afternoon sessions or as rebellious stretching away from test cases have always been a little foreign to me. 

What the three stories have in common is that exploratory testing is part of the pivotal moments that make us love the testing work and do well with its results. My pivotal moment came in my second job where I was handed a specification, not test cases, and I had to turn my brain on; I've been on a path of extraordinary agency and learning since. 

Also, these stories illustrate how important the managers / tutors are in setting people up on a good path. Given requirements to turn into test cases, you simplify the work and miss results. Given test cases, you do work better left for computers. Given time without support, you do what you can, but support is what turns your usefulness around. 

Wednesday, September 29, 2021

Six Things a Facilitator Can Do to Improve an Ensembling Session

Today, as I shared on Ensemble Testing at EuroSTAR around lunch, one of the questions led us to discuss what specifically the facilitator can do to make ensembling better. Watching people fail from the back row would make me squirm uncomfortably, so what can I do? 

These things have been very useful to me. 

1. Support Moving Forward with Questions

You see the driver on the keyboard not moving, and the navigator off keyboard unable to decide what to do. Ask questions! For example, you could ask the navigator "What would you like to do next?". A more general rule is to try to talk in questions as a facilitator. Think about what you want the ensemble to do so that they would work well and have a great experience, and frame your comment as a question that guides them towards that great experience. 

2. Call out a Thing They Do 

You see them doing something that they don't realize they do, but you realize because you've seen others do the same but struggle. Call out the pattern. For example, if an ensemble is making good notes about their testing with a feature–variable–data-structure model that you have learned to appreciate, name it in the moment. Giving something a label makes it a little easier to retain. 

Don't overdo this during the session; you can also save examples to point out in the retro. Call it out if it helps in the moment to retain the thing and adds to the vocabulary they use to successfully communicate. 

3. Step in to navigate

I use this one a lot in teaching ensembles, but I have also found it useful on difficult work. For example, I might say: "Let's pause the rotation and I'll navigate a bit." This frees the current navigator, who steps back in to continue with the timer as soon as I step out. 

I use the same pattern with a single expert, asking them to navigate when the struggle of the others in the ensemble is no longer a benefit to their learning. That is, I only step in to show if I know how, but I can also ask someone else, or even a volunteer, to step in for a while. 

4. Stop the Ensemble for a Mini-retro

Make space for them to fix themselves. A lot of new ensembles need that. Well, a lot of older ensembles still need someone to point out they should have a conversation. I once watched Woody Zuill do just that: point out a dynamic that the team needed to have a conversation on. 

Some of my best facilitation tricks involve calling a retro after just a few rotations. People somehow need a moment where they agree on how to fix their work style before they can fix it. The facilitator can create those spaces. 

5. Set a Constraint

In one of the first ensembles I ever facilitated, I saw my co-facilitator use this on me, the expert of the group, as I stepped in to navigate. With a twist, though: I had narrow rules on the type of work I was supposed to do. The work was exploratory testing, and the new group struggled with note taking. The constraint applied to me was to only improve the notes: the structure and content of what we had already learned. 

I have used this technique since, and it works great, but different groups need different constraints. 

Helping the ensemble figure out the scope of the task they are on now is setting these constraints. Thinking in terms of what is included, adding to what is included only with the "yes, and..." rule, and parking ideas for the future all help an ensemble work. 

6. Visual Parking Lot

Create a space - in the documentation or on a whiteboard - to make notes of things you leave for later. People generate great ideas while the work is ongoing, and they may forget them by the time we seek the next piece of work to do. Give them a space and a mechanism to park those ideas as they emerge, and occasionally call for a reflection on structuring the parking lot. 

Saturday, September 25, 2021

Hiring manual and automation testers

In a meeting about hiring a new tester, a manager asks me: 

Are we looking for a manual or automation tester? 

In my head, time stops and I have all the time in the world to wonder about the question. I'm neither. Nor are any of my immediate team colleagues. Yet look at the next team and they have one manual and one automation tester. No wonder the manager asks this. We've moved from this to the next level. We're neither. We're both. Preferably, we're something different: contemporary exploratory testers. 

In the actual conversation, I don't remember the exact words I use to explain this, but I remember the manager's reaction: "that makes sense". 

We are looking for someone who is *learning* and does not box testing into *manual* and *automation* but builds from a strong foundation of understanding testing and does not stop at the sight of code. 

We want a tester who, when changing the hardware setups and network configurations, also changes the setups in the test automation repos and verifies that whatever tests we have automated will still run, instead of handing the information and the task to someone else. 

We want a tester who reviews other testers' test automation pull requests and proposes improvements both in what gets tested and how it gets tested, and understands what the automation now covers. 

We want a tester who reviews application developers' pull requests for the scope and risk of change, and targets their activities using this information as one source of understanding what they might want the team to test. 

We want a tester who documents their lessons from spending days deeply analyzing features for problems with some tests they leave behind to run in automation. 

We want a tester who talks with documentation specialists, product management, project delivery organization and support, and turns the lessons into investigations that could leave something behind in either unit tests or system level test automation. 

We want a tester who pays attention to the state of the automated tests and volunteers to investigate and fix in the team. 

We want a tester who creates automation to do repeatable driving and monitoring of a feature the team is changing now, analyzes the insights it produces, and also considers throwing away the automation when it makes no sense to keep it around continuously. 

We want a tester who will spend four weeks building the most complex analog measurement test setup with resistors and power sources, and understands which parts of that are relevant to include in our test automation setups. 

We want our testers to work without having to hand off everything within testing to someone else because that's the only way they can imagine good testing and good test automation co-existing. 

I have these testers and I grow these testers. The 14-year-old intern who joined us this week has already been a tester like this, working in pairs and ensembles and picking up tasks they can do. They've written tests in Python for APIs and in Robot Framework for GUIs, and found critical bugs in ongoing features. 

Hire for potential. Hire for growth. Hire for learning. Hire for attitude. 

If the attitude is missing the power of "yet", as in "I don't automate, yet" or "I don't design versatile tests, yet", it may be a better idea to invest time in someone who already harnesses the power of yet. I require working with code from a tester. But just as much, I require them to be ready to become excellent testers in their own right. 

Sunday, September 19, 2021

There are Plenty of Ways to Talk about Exploratory Testing

Out of all things Ministry of Testing, I am part of a very small corner. I speak at the independent meetups even when they fly under that flag, I speak at their sessions if they invite me (which they don't), and I am a silent lurker on a single channel, exploratory-testing, on their Slack. Lurking provided me a piece of inspiration today: an article from Jamaal Todd.

Jamaal asked LinkedIn and Reddit about exploratory testing and learned that less than 10% of his respondents don't do exploratory testing, 50-70% give it a resounding yes, and a fair portion find it at least sometimes worth doing, out of the 400 people who took the time to click on the polls. 

What Jamaal's article taught me is that a lot of people recognize it as something of value, and that surprised me a little. After all, as we can see in the responses by one of the terrible twins of testing in the Slack, they are doing a lot of communication around the idea that exploratory testing does not exist.

It exists, it is doing well, and we have plenty of ways to talk about it which can be really confusing. 

When I talk of exploratory testing, I frame my things as Contemporary Exploratory Testing. The main difference in how I talk about it is that it includes test automation. Your automated tests call you to explore when they fail, and they are a platform from which you can add power to your exploration. Some of them even do part of the exploration for you.
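To make the "platform for exploration" idea concrete, here is a minimal sketch in Python (a hypothetical example of my own, not from any real project): a parametrized check whose input list you extend on the fly while exploring, so that every surprise found by hand gets pinned down in automation, and every failure is an invitation to explore further around that input.

```python
def normalize_name(raw: str) -> str:
    """Toy function under test: collapses whitespace and title-cases a name."""
    return " ".join(part.capitalize() for part in raw.split())


# Start with the obvious inputs; append new (input, expected) pairs
# as exploration turns up surprises worth keeping as regression checks.
exploration_inputs = [
    ("ada lovelace", "Ada Lovelace"),
    ("  grace   hopper ", "Grace Hopper"),
    ("", ""),  # boundary found while exploring, now documented in automation
]


def run_checks():
    """Run every pinned-down observation; each failure is a call to explore."""
    failures = []
    for raw, expected in exploration_inputs:
        actual = normalize_name(raw)
        if actual != expected:
            failures.append((raw, expected, actual))
    return failures
```

The design choice is that the automation and the exploration share one artifact: the list grows as you learn, and a red result does not just say "broken", it points at the exact input to explore next.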

Not everyone thinks of exploratory testing this way. The testing communities tried labeling different ideas a few decades ago with "schools of testing", and we are still hurting from that effort. When the person assigning labels to others does so to promote their own true way, the other labels come across as dismissive. "Context-driven" sounds active, intentional. But "Factory" is offensive. 

One of the many things I have learned in programming is that naming is one of the hard problems. A lot of the time we try to nail the name early on, whereas we could go ahead and rename things as we learn their true essence. Instead of stopping to think about the best name, start with some name, and refactor. 

So, we have plenty of ways to talk about exploratory testing, what are they? 

  1. Contemporary Exploratory Testing. It includes automation. Programming is non-negotiable, but programming comes from smart people. The reason we don't automate (yet) is that we are ramping up skills. 
  2. 3.0 Exploratory Testing. It does not exist. There is only testing, and the non-exploratory part we want to call 'checking', mostly to manage the concern that people don't think well while they focus on automation. Also known as RST, Rapid Software Testing: it is all exploratory testing.  
  3. Technique Exploratory Testing. We think of all things we have names for, and yet there is something in testing that we need to do to find the bugs everything else misses. That we call exploratory testing. Managing this technique by sessions of freedom is a convenient way to package the technique. 
  4. Manual Exploratory Testing. It's what testers who don't code do. An essential part of defining it is that automation is not it, usually motivated by testers who already have their hands full of valuable work without automation. 
  5. Session-based Exploratory Testing. Without management through sessions, exploratory testing isn't disciplined and structured. The focus is on planning frequently what we use time on and ensuring there is enough documentation to satisfy the organization's documentation needs we aren't allowed to renegotiate. 

Let's start with these. Every piece of writing there is on exploratory testing falls into one of these beliefs. The thing is, all that writing is useful and worth reading. It's not about one of these being better, but about you being able to make sense of the idea that NOTHING is one thing when people are involved. 

I invite you all to join the conversation on all kinds of exploratory testing on the Exploratory Testing Slack. A link is available on the main page of Exploratory Testing Academy.


Friday, September 10, 2021

The Power of Three Plans

This week, I have taken inspiration from the discussions at FroGSConf last week and worked on my test plans. And today, I am ready to share that instead of creating one plan, I created three - I call this the power of three. Very often different formats will serve best for different things you are trying to plan for, and for the things I wanted, I couldn't manage with one. 

The 1A4 Master Test Plan

The first plan I created was a master test plan. Where I work, we have this fairly elaborate template from the times when projects were large and not done in an agile fashion. That plan format overemphasized thinking of details prematurely, but has good ideas behind it, like understanding the relationship of different kinds of testing we will or will not do. 

Analyzing it, I could condense the relevant part of the plans into one A4 with many things that are specific to the fact that we are building embedded systems. 

While I don't think I wrote anything into the plan so insightful that I couldn't share the actual plan I created for my project, I opt for the safer side. We did not tick some of these boxes, and with this one-glimpse plan we could see which ones we did not tick, and had a conversation on one of them that we should be ticking even if we didn't plan to. 

You can see my plan format has five segments:
  • Quality Target describes the general ideas about the product that drive our idea of how we think around quality for this product. 
  • Software Unit & Integration Testing describes the developer-oriented quality practices we choose from generally. 
  • Hardware Testing addresses the fact that there is a significant overlap and dependency we should have conversations on between hardware and system testing. 
  • System Testing looks at integrated features running on realistic hardware, becoming incrementally more complete in agile projects. 
  • Production Testing addresses the perspective that hardware units are individuals with particular failure modes, and that in the assembly of a system we have customer-specific perspectives to have conversations on. 
For us, different people do these different tests but good testing is done through better relationships between the groups, and establishing trust across quality practices. The conversations leading up to a plan have taken me months, and the plan serves more as a document of what I facilitate than a guideline of how different people will end up dealing with their interdependent responsibilities. 

We could talk about how I came up with the boxes to tick: through workshops on the concepts people have in the company and creating structure out of the many opinions. A visual workshop wins over writing a plan, but we could talk about those in another post later. 

The System Test Strategy

The second plan I created was inspired by the fact that we are struggling with focus. We have a lot of detail, and while I am seeing a structure within the detail, I did not like my earlier attempts at writing it down. On the course I teach for Exploratory Testing Academy, I have created a super-straightforward way of doing a strategy by answering three questions, and I posted a sample strategy from the course on Twitter. 
I realized I had not written one like this for the project I work in, so I got to work and answered those questions. This particular one I even shared with my team, and while I only got comments from one person, their perception was that it shone light on important risks and reactions in testing. 

In hindsight, my motivation for writing this was twofold. I was thinking of what I would say to the new tester, starting in a week, about the ideas that guide our testing, and I was thinking of what would help me prune out the actions that aren't supporting us in the tight schedule we have ahead of us. 

This plan is actually useful to me in targeting the testing I do, and it might help with some in-team conversations. I accept that no document or written text ever clears it all, but it can serve as an anchor for some group learning. 

The Test Environments Plan

The third plan I produced is a plan of the hardware and connections in our test environments. If there is one thing that does not move in a very agile fashion, it is getting hardware. I am not worried about the hardware we build ourselves, but about the mere fact that we ordered 12 off-the-shelf-type mini-PCs in May, and we currently expect to receive them in December. There are many things in this space that, if you don't design them in advance, you won't have when you need them. The hardware and systems are particularly tricky in my team with embedded software: we each have our own environment, we have many shared environments, and some environments we want to ensure have little to no need of rebooting, to run particular kinds of tests on. 

So I planned the environments. I drew two conceptual schematics of end-to-end environments with the necessary connections, separated purposes into different environments, and addressed the fact that these environments serve a team of 16 people in my immediate vicinity, and hundreds of us over the end-to-end scenario. 

It was not the first time I planned environments. The main things I did this week on this plan were ensuring we have what we need for new hires and new features coming in Fall '21, and that we would be better equipped to help the project discuss the cost and schedule implications of not having some of what we need. 

The Combination

So I have three plans: 
  • The 1A4 master test plan
  • The system test strategy
  • The test environment plan
For now I think this is what I can work with. And it is sufficient to combine them as just links to each of the three. Smaller chunks are more digestible, and the audiences differ. The first is for everyone testing for a subsystem. The second is for system testing of the subsystem, in integration with other subsystems. The third is for the subsystem, to be reused by the other teams this subsystem integrates with. 

I don't expect I need to update any of these plans every agile iteration we do; the ideas will evolve, while they might stand the test of time for the next six months. We will see. 

Sunday, September 5, 2021

Test Plans and Templates

Imagine being assigned responsibility for helping all the projects and products in your organization start off on a good foot in testing. There are enough of them that you just can't be there for all of them to teach the necessary skills. You've seen a good few get lost on testing, miss out on the availability of test environments and data, and projects get delayed. You want to help, to give some sort of a guideline.

The usual answer to this is creating a template of some sort, to guide people through important considerations by documenting their thinking. When they document their thinking, others have a chance of saying what is missing. 

If it sounds so lucrative, why is it that these plans often fail? People don't fill in the template, finding little value in it. People fill in the template, but the testing done does not match the plan; people don't read the text. And you really can't create the skill of thoughtful thinking with a template. 

Yesterday at #frogsconf (Friends of Good Software), one of the conversations was on the test plans and templates we create. As I saw others' examples and showed mine, I hated every single document I had written. The documents are not the conversation that precedes good work. The documents create a shallow bind between the reader and the document, and true co-ownership would require ensemble-writing the document so that it's ours, not mine. And instead of the many words, we'd be better off filtering out the core words that will lead our efforts. 

My strategy for test automation nowadays distills into one sentence: if we don't change anything, nothing changes for the better. 

The fewer words we write, the more likely people are to read them, hear them, internalize them and use them to guide their work. 

To create a plan, a better approach might be a visual whiteboard with as few sections to fill as possible. Allow people to find their own words and concepts to explain how they will do the work. 

I shared an example from the course I have been creating, an example I have seen direct students into testing the application better. The problem is, I needed to do the work of testing the entire application to be able to write that document, and that is not what we can expect of projects. 

I have a few plans I need to do next week, so now is my time to think about what those plans will look like. 

Tuesday, August 31, 2021

Stray Testers Unite!

 I have been observing a phenomenon: there are stray testers out there. 

It is unfortunately common for testers to find themselves wandering at large or lost in what it looks like to do a good job at testing. 

For one tester, being stray manifests as waiting for a project to truly start. There's architecture, there are requirements, conversations, strategies, agreements, team building and all that, but when it comes to testing, the early prototypes often feel better left for developers to test. And there is only so much one can support, lead and strategize. 

For another tester, being stray manifests as following so many teams that they no longer have time for hands-on work. They didn't intend to get lost in coordination and management, but with others relying on them knowing and summarizing, it comes naturally. They make the moves but find no joy. 

For a third tester, being stray manifests as not knowing where to even start or where to head. With many things unknown to a new professional and little support available, trying to fulfill vague and conflicting expectations about the use of time and results leaves them wandering around. 

In many ways, I find we testers are mostly strays these days. We consider how *we* speak to developers but don't see developers putting the same emphasis on the mutual relationship. We navigate the increasingly complex team and system differences, figuring out the task of "find (some of) the bugs that we otherwise missed". We face expectations of little time used, many bugs found, and everything documented in automation while creating nice lists of bug reports in a timely manner. The ratio of our kind is down, driven toward zero by making some of us assume new 'developer' identities. Finding our tribe is increasingly difficult, and requires looking outside the organization to feel less alone and different. 

Communities are important. Connections are important. Caring across company boundaries is important. But in addition to that, I call on companies to do their share and create spaces where testers grow and thrive. We need better support for skills and career growth in this space. We need time from our peers to help us and to learn together. We need the space to learn, and the expectation and support from our teams in doing that. 

Make sure you talk to your colleagues in the company. Make sure you talk to your colleagues in other companies. It will help us all. Stray testers need to unite. 

Tuesday, August 24, 2021

Social Media, a Practitioner Perspective

Someone at the company I work with invited me to share my experiences of being active on social media about work-adjacent topics, particularly with what they framed as *thought leadership on LinkedIn*. In preparing to share, I did what I always do: I opened a page in my notebook and started scribbling a model of things I would like to share. And since I have now done that step and shared internally, I realized writing it down would be necessary, to notice a slow change of mind over time. 

Where I'm at With Social Media

My story starts with the end - where I am today. 
  • 4100 connections on LinkedIn
  • 7221 followers on Twitter
  • 754 739 blog views over 775 blog posts
  • 450 conference sessions
I didn't set out to do any of this. It all grew one thing at a time. And I don't consider it a significant time investment; it merely reflects doing many little things over and over again over many, many years. 

Why It Matters, Though

I recount the story of how I got my latest job: a tweet I shared announcing I was ready to move, lovely people reaching out in significant numbers to discuss opportunities, turning into creating a position together that I would accept. This was my second position built like this, with an even better experience than before: I met people both on the hands-on side and in management before making the mutual commitment to take quality and testing forward together. I would not have found a job I thoroughly enjoyed without the network; this one was both found with the network and built with the network. 

If I didn't have my connections, this would not be possible. 

Traversing the Timeline

Drafting over an electronic whiteboard, I drew a timeline with some of the core events. 
  • 2009 I wrote my first blog post. 
  • 2010 I joined Twitter.
  • 2015 I realized LinkedIn was not for people I knew and had met but for all the professional connections I wanted to uphold.   
  • 2020 I started posting on LinkedIn. 
My history with social media is not that long. And while it may now be strategic, it did not start off that way. 

My whole presence on social media is a continuation of the work with non-profits and communities I started in 1993. At first, being one of the very few women in tech at Helsinki University of Technology turned me into a serial volunteer. Later this background made me volunteer from 2001 to 2011 for the Finnish Association for Software Testing. Finally, I founded Software Testing Finland ry in 2014. 

Non-profits and communities were important to me for the very same reason social media is now important to me. 
I am a social learner who fast-tracks learning and impact through networks of people. I show up to share and learn, to find my people and the content that helps me forward in understanding.  
I started speaking at conferences to get over a paralyzing fear of stages. 
I soon learned that the best way to attract good learning conversations was to share what I was excited about, from the stage. 
I started blogging to take notes on my learning. 
I later learned that traveling to stages is limiting, when your blog can be a source of those connections too. 

My Content Approach

If my thinking takes a page or more to explain, I write a blog. 
If I have a thing to make a note of that I can say in public and it can be summarized shortly, I tweet it.
If it is too long to be a single tweet and I want to refrain from tweet-storming a series of tweets, I take it to LinkedIn.
If I am seriously thinking it is good content that should be relevant for years, I publish it in one of the magazines or high traffic blog sites known for good content. 
If it can't be public, I have private support groups both inside the company and outside to discuss it. 

I don't have a schedule. I make no promises on where I post and when. It emerges as I follow my energy and my need to structure my thoughts. 

Making Time

My most controversial advice is probably around how to make time. 

I have two main principles on how I make time:
  1. No lists. When the urge to write something down on a list hits me, I can just write it to a proper channel. Time spent on lists is time away from publishing the finished piece.
  2. Write only. I use all of the public channels as write-only media. I very often mute the conversations and, if I have the time and energy, go back to see what is going on. Sometimes I mute at the first annoying comment. And I have taken coaching to find peace in not arguing and explaining, but in expecting people who show up in my mentions for a conversation to approach with curiosity and acceptance that I am not always available. 
I read things, but on my terms and schedule. I read what others write for a limited amount of time and what I see is based on luck and some of the algorithms making choices for me. I read comments and make active choices of when to engage. 

Social media is not my work. I have a testing job with major improvement responsibilities to run. Social media is a side thing. 

Deciding on What You Can Say

Finally, we talk about what we can say. I have carefully read and thought about the social media guidelines my employers have, and seek to understand their intent. My thinking on what to say is framed around professional integrity and caring for people and purpose. Professional integrity means for me that I can discuss struggles and challenges, as long as I feel part of the solutions we are working on. Caring for people means recognizing that many of my colleagues read what I write and see themselves even in writing I did not think was about them, but about general challenges many people recognize. Caring for purpose means thinking about how to do no harm while maintaining integrity. 

We all choose a different profile we are comfortable projecting. What you see me write may appear unfiltered, but I guarantee it is not. 

The impacts of sharing openly are varied. Sometimes knowing what thoughts I am working through is an invitation for people to join forces and see perspectives. Most often people trust my true enthusiasm for solving each and every puzzle, including ones involving compromises. Sometimes I have offended people. I've appreciated the one who opened a conversation on the feelings my reflections raised when some of my wishes seemed impossible at the time.

I also remember well how one of my blog posts caused a bit of a discussion in my previous place of work. I still maintain my stance: it was a blog post about how test automation very often fails to reach its value when done on the side in projects, a problem I have lived through multiple times. But it was written at a time when, for someone, that sting of failure on top of their technical success was too much to handle. 

I apologize when I hurt people. I don't apologize for their feelings being hurt, but I work to understand what it was that I did. Apologizing comes with a change I will try to make. 

Final Words

When we were discussing me joining my current organization, my recruiting manager called an old manager of mine from over 10 years ago. The endorsement came with a realistic warning: Maaret is active on social media, and you want to be aware of that. 

I was hired regardless, and I love my work. There are always ups and downs, and being visible and vocal holds power I try to use with respect. 

The platform is part of building a career instead of holding a job. Be who you want to be on social media so that it supports your career. My version is just my version, and yours should look like you. 

Wednesday, August 18, 2021

Future of Testing and Last Five Years?

This morning opened with Aleksis Tulonen reminding a group of people that five years ago he had asked us a question about the future of testing in five years' time. 

I had no idea what I might have said five years ago, but compared to what I am willing to say today, the likelihood that I said something safe is high. 

So, what changed in five years? 

  • Modern Agile. I had not done things like No Product Owner, No Estimates and No Projects at the last checkpoint. I have done them now. And I have worked in organizations with scale. These rebel ideas have operated at the scale of the team I work in, with relevant business around us. 
  • Frequent Releases. It may be that I learned to make it happen, but it is happening. I've gone through three organizations and five teams, and moved us to better testing through more frequently baselining quality in production with a release. And these are not all web applications with a server; globally distributed personal computers and IoT devices are in my mix.  
  • Integrated engineering teams without a separate testing group. Testers are in teams with developers. They are still different but get along. Co-exist. Separate testing groups exist in fewer places than before. Developers at least help with testing and care about unit testing. You can expect unit testing. 
  • Exploratory includes automation. The profile of great testers changed into a combo of figuring out what to test and creating code that helps test it. The practice of "you can't automate without exploring; you can't explore (well) without automating" became day-to-day practice in my projects. 
  • BDD talk. BDD became a common storyline and I managed to avoid all practical good uses. I tried different parts of it but didn't get it to stick. But we stopped using the other words as commonly - specification by example and acceptance test driven development lost the battles.  
  • Ensemble Testing and Programming. It moved from something done at Hunter to something done in many, but still rare, places. I made it core to my teaching and to facilitating exploratory testing at scale in the organizations I work at. And it got renamed after all the arguments about how awful 'mobbing' sounds. The new term hasn't completely won out yet, but it has traction. 
  • Testing Skill Atrophy. Finding people 'like me' is next to impossible. Senior people don't want to do testing, only coach and lead testing or create automation. Senior testers have become product owners or developers or quality coaches but rarely stay in hands-on testing. We are more siloed within testing than we were before. And finding "a tester" can mean so many things that recruiting is much harder these days. 
  • Developers as Exploratory Testers. Developers started testing: in addition to small-increment test-after cycles taking us to a good level of unit testing without TDD, developers started driving and contributing in exploratory testing on different scopes of the system. They were given permission to do 'overlapping' work and ran further than testers got in the same timeframe. 
  • Test Automation University. Test automation became the go-to topic for new testers to learn. Test Automation University, headmastered by Angie Jones and sponsored by Applitools, became a bazaar of materials on many different tools. 
  • Open-Source to Fame. Everyone has their own tool or framework or library. Everyone thinks theirs is better than the others'. Very few really know the competition, and marketing and community building are more likely to lead to fame. Starting something became more important than contributing to something. 
  • Browser Driver Polite Wars. Alternatives to Selenium emerged. Selenium became a standard and even more alternatives emerged. People did a lot of browser testing, and Cypress made it onto the JS-world radar for real. Playwright started but is in the early buzz. Despite options that are impossible for so many people in development efforts to grasp (there's other stuff to focus on too!), people mostly remained respectful. 
  • Dabbles of ML. The first dabbles into machine learning in the testing space emerged. This space is dominated by commercial tools, not open source. And programming was "automated" with GitHub Copilot, which translates well-formulated intentions in comments into code machine-learned from someone else. Applications with machine learning became fairly commonly available, and bug fixing for those systems became different. 
  • Diversified information. There are more sources of information than ever before, but it is also harder to find. Dev.to and self-hosted blogs are the new go-to, and in addition to written content, video and audio content has become widely available. The difficult part is figuring out what content makes sense to give time to, and we've seen the rise of newsletter aggregators in the testing field. 
  • One Community is No More. Some communities have become commercial and in many ways now resemble tool vendors. Come to our site, buy our thing: paywalls are a thing of the day. At the same time, new sources have emerged. There is no "testing community"; there are tens of testing communities. Developer communities have become open to testing, and hanging out with people in the language you work in has become something more testers opt in for.
  • Twitter Blocks Afoot! While visible communication is more civil and less argumentative than before, people block people with a light touch. If blocking someone five years ago was an insult, now it is considered less of an insult and more of a statement of curating the things you end up reacting to in the world.
  • Women in Testing. The unofficial Slack community grew and became a globally connected group of professionals. A safe space with people like me enabled me to start paying attention to content from men again, and saved many people from feeling alone and isolated in challenging situations. The community shows up at conferences in group photos.
  • DevOps. It is everywhere. The tools of it. The culture of it. The idea that we pay attention to 'testing' in production (synthetic tests and telemetry). 'Agile' became the facilitator wasteland and developers of different specialties grouped their agile here. 
  • Cloud. Products went cloud-first. Supporting tools followed suit. And the world became cloud-native for many corners of the software development world.
  • Mobile & API. These became the norm. REST APIs (or gRPC) in the IDE are the new UI testing. Mobile has separate language implementations for presentation layers and forced us to split time between Web and Mobile. 
  • Crowdsourcing. It remains, but it did not commoditize testing much. I find that almost surprising, and take it as hope for a better future where testing is not paying users peanuts to hang out with our applications, with bigger peanuts if they are lucky to see a bug. 
I most likely forgot a trend I should have named, but the list of reflections is already long. But back to what I predicted. 

I don't think much will change in 1, 3 or 5 years, other than that our approaches continue to diversify: some companies will embrace devops-style development and continuous testing, while others figure out ways of releasing in small batches that get tested, and others do larger and larger batches. Especially in the bespoke software sector, the forces of how to make money so favor waterfall that it will take decades to change.

But if there is a trend I’d like to see in testing, it’s towards the assumption of skill in testing. Kind of like this:

- black-hat hackers are people who learn to exploit software for malice.

- white-hat hackers are double agents who exploit software for malice and then give their information to the companies for good.

- exploratory testers are white-hat hackers who exploit software for any reason and give that information to companies. From compromise to more dimensions like "hard to use", "doesn't work as intended", "annoys users".

- because exploratory testers are a more generalized version of hackers, exploratory testing will go away at the same time as software hackers go away. You can get AI that writes programs, but you will need exploratory testers to figure out whether the programs produced are what you want.

I don't think I see my hope of an "assumption of skill in testing" realized. I see better programmers who can test, and a few good testers. Being a developer with a testing specialty is one of the entry-level roles, and everyone is expected to pick up some programming. Acceptance testing by non-programmers remains; it is lower paid and has a different time-use profile: as much time, but in small deliveries and with better quality to begin with. 

My bet that AI will come for the programmer jobs barely fits the 5-year window, but from where we are now, I wouldn't say I was wrong about it. Then again, testers became programmers and we started understanding that programmers don't write code 7.5 hours a day. 

Next five years? We'll see how that goes. 

Conflicted in Bug Reporting

In 1999 I was working on a research paper on bug reporting systems. I was motivated by the fact that the very first project of my career (Microsoft) did bug reporting tooling so well compared to anything and everything I have seen since, and their tool was not made available to the public. It still isn't. 

With a lot of digging into published material, I found one thesis that included public descriptions of what they had, but it was limited. So I read through a vast amount of other literature on the topic and learned that bug classification was academia's pet. 

I never finished the research paper, but everything I learned in the process has paved my way to figuring things out in projects ever since. And with another 22 years of really looking at it and thinking about it, I have come to disagree with my past self. 

The current target state for bug reporting that I aspire to lead teams into is based on a simple principle:
Fix and Forget. 
I want us to be in a state where bug reporting (and backlogs in general) is no longer necessary. I want us to have so few bugs that we don't need to track them. When we see a bug, we fix it and improve our test automation and other ways of building the product across its layers enough to believe that if the problem were to re-emerge, we'd catch it. 

With a lot of thought and practice, I have become pretty good at getting to places where we have zero bugs on our backlogs, and as new ones emerge, we invest in the fix over the endless rounds of prioritizing and wasted effort that tracking and prioritizing creates. 

At this point of my career, I consider efforts to classify an organization's internally discovered bugs something we should not do. We don't need the report, we need the fix. We don't need the classification, we need the retrospective actions that allow us to forget the detail while remembering the competence, approach and tooling we put in place. 

At this point of my career, I consider the ideas I used to promote in 1999, a bug reporting database with detailed classifications for "improvement metrics", a leap back in time and a reason to choose something else to do with the little time I have available and under my control. 

I think in terms of opportunity cost: it is not sufficient that something can be done and is somewhat useful when done. I choose the work to do while weighing the other work that could be done in the same time if this was not done.

Instead of reporting, reading and prioritizing a bug, we could fix the bug.
Instead of clarifying priorities by committee, the committee could teach developers to make priority decisions on fixes.
Instead of reporting bugs at all from internal work, introduce "unfinished work" for internal bugs.
Instead of expecting large numbers of bugs to manage, close every bug within days or weeks with a fix or a decision.
Instead of warehousing bugs, decide and act on your decisions.

Friday, July 30, 2021

How Would You Test A Text Field?

A long, long time ago we used the question "How would you test a text field?" in interview situations. We learned there was a correlation with how well the person had their testing game together, even for such a simple question, and we noted there were four categories of response types we saw, repeatedly. 

Every aspiring tester, and a lot of developers aspiring to turn into testers, approached the problem with a simple-inputs, automate-it-all approach. They imagined data you can put into the field, and automating data is a natural starting point when there is a simple way of imagining how to recognize success. They may even imagine easily repeatable dimensions like different environments or speed, and while they think in terms of automating, they generally automate regression, not reliability. Typical misconceptions include thinking that the hardware you run on always matters (well, it may matter with embedded software and some functionalities we use text fields for) or that someone else will tell them what to test. It used to be that they talked about someone else's test cases, but with agile, the replacement word is now acceptance criteria. Effectively, they think testing is checking against a listing someone else already created, when that is at most half the work. 

Functional testers are only a notch stronger than aspiring testers. They come packed with more ideas, but their ideas are dull, like writing SQL into a text field in a system that has no database; it only matters if there is a connection to an SQL database somewhere further down the line. So while the listing of things to try has more width, it lacks depth of understanding of what would make sense to do. Typical added dimensions for functional testers are environments, separating function and data, seeing function through the interface (like enter vs. mouse click), and applying various kinds of lists and tools that focus on some aspect, like HTML validators or accessibility or security checkers. Usually people in this category also talk about what to do with the information that testing provides and about writing good bug reports. On this level, when they mention acceptance criteria, they expect to contribute to them. 

The highest levels are separated only by what the tester in question starts with. If they start with *why would anyone use this?* and continue questioning not only what they are told but what they think they know based on what they see, they are Real Senior testers, putting every opportunity to test in the context of a real application, a real team, and a real organization with real business needs and constraints. If they start with showing off techniques and approaches and dimensions of thinking, they still need work on the *financial motivations of information* dimension. The difference from the Close to Senior level is in prioritizing in the moment, which is one of the key elements of good and solid exploratory testing. Just because something could be tested does not mean it has to be, and we make choices about what we end up testing every time we decide on our next steps. 

If we don't have multidimensional ideas of what we could do, we don't do well. If we don't put our ideas in an order where we are already doing the best possible work in the time available when we stop without exhausting our ideas, we don't do well. 
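To make the contrast concrete, here is a minimal sketch of what the simple-inputs level of answer tends to look like in practice. The `accepts` function is a hypothetical stand-in for a text field's validation logic, invented for illustration rather than taken from any real interview exercise; the enumeration partitions the data into classes but says nothing about purpose, context, or priority.

```python
# Hypothetical stand-in for a text field's validation rules --
# in a real interview this would be the application under test.
def accepts(text: str) -> bool:
    return 0 < len(text) <= 32 and text.isprintable()

# The simple-inputs approach: partition data into classes, check each one.
cases = {
    "": False,          # empty input
    "hello": True,      # plain ASCII
    "x" * 32: True,     # exactly at the length boundary
    "x" * 33: False,    # just over the boundary
    "grüße": True,      # non-ASCII is still text
    "a\x00b": False,    # control characters rejected
}

for text, expected in cases.items():
    assert accepts(text) is expected, repr(text)
print("all input classes behaved as expected")
```

Passing all of these says the checking half is done; it says nothing about whether the field should exist, how it interacts with the rest of the system, or which of these checks was worth running first.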

With years of experience with the abstract question, I moved toward making the question more concrete: sharing something that was a text field on the screen and asking two questions:

  • What do you want to test next?
  • What did you learn from this test? 
I learned that the latter question in general helps people do better testing than they would without the coaching that sort of takes place there, but I don't want to hire a tester who is so stuck in their past experiences that they can't take in new information and adapt. I've used four text fields as typical starting points:
  1. Test This Box. This is an application that is only a text field and a button, and provides very little context around it. Seniors do well at extracting a theory of purpose, comparing it to the given purpose, dealing with the idea that it is a first step in incrementally building the application, and learning that while the field is not used (yet), it already displays, and that the application has many dimensions in which it fails unintentionally. 
  2. Gilded Rose. This is a programming kata, a function that takes three inputs, and the inputs could just as well be text fields; a text field is just an interface. The function gives a clear and strong perspective on code coverage but also risk coverage, like: who said you weren't supposed to use hexadecimal numbers? Using this text field I see the ability to learn, and it is my favorite when selecting juniors I will teach testing but who will need to pick up guidance from me. Also, if you can't see that code and an IDE are just another UI when someone is helping you through it, I feel unsure about supporting you in growing into a balanced contemporary exploratory tester who documents with automation and works closely with developers.  
  3. Dfeditor animations pane. This is a real-size application whose UI has text fields, like they all do. The text field is in the context of a real feature, and a lot of the functionality is there by convention. This one reveals to me whether people discover functionality, and they need to be able to do that to do well in testing. 
  4. Azure Sentiment API. This is an API with a web page front end, with an ML implementation recognizing the sentiment of text automatically. This one is the hardest to test and makes aspiring testers overfocus on data. For seniors it really reveals whether people can tell the difference between feedback that can be useful and feedback that isn't, through connections of business and architecture. 
Watching people in interviews and trainings, my conclusion is that more practice is still needed. We continue to treat testing as something that is easy and learned on the job without much focus. 

If I had the answer key to where bugs are, wouldn't I trust that the devs can read it too and take those out? But alas, I don't have the answer key. My job is to create that answer key. 

Thursday, July 29, 2021

Tester roles and services

An interesting image came across my twitter timeline. It looked like my favorite product management space person had been thinking and modeling, and created an illustration of the many hats people usually wear around product creation. Looking at the picture made me wonder: where is testing? Is it really that one hat in one category of hats? Is it the reverse side of every single one of these hats? Don't pictures like this make other people's specialties more invisible?

As I was talking about this with a colleague (like you do when you have something on your mind), I remembered I had created a listing of the services testing provides where I work. And reading through that list, I could create my own image of the many hats of testing: 

  • Feature Shaper focuses on what we think of as feature testing. 
  • Release Shaper focuses on what we think of as release testing. 
  • Code Shaper focuses on what we think of as unit testing. 
  • Lab Technician builds systems that are required to test systems. 
  • On-Caller provides quick feedback on changes and features so that no one has to carry major responsibilities alone.  
  • Designer figures out how we know what we don't know about the products. 
  • Scoper ensures there's less promiseware and more empirical evidence. 
  • Strategist sets us on a journey to the future we want for this project, team and organization. 
  • Pipeline Architect helps people with their chosen tools and drives the tooling forward. 
  • Parafunctionalist does testing on the top skills areas extending functional: security, reliability, performance and usability. 
  • Automation Developer extends test automation just as application is extended. 
  • Product Historian remembers what works and what does not and if we know so that we know. 
  • Improver tests product, process and organization and does not stop with reporting but drives through changes. 
  • Teacher brings forward skills and competencies in testing. 
  • Pipeline Maintainer keeps pipelines alive and well so that a failing test ends up with an appropriate response. 
With all these roles, the hats in my team are distributed across the entire team, already creating a reality where no two testers are exactly the same. And why should they be: we figure out the work that needs doing in teams where everyone tests, just not the same things, the same way. 

Wednesday, July 28, 2021

The Most Overused Test Example - Login

As I was looking for a particular slide I created to teach testing many, many years ago, I ran into other ones I have used in teaching. Like the infamous, most overused test example, in particular in the test automation space: the login.

As I look at my old three-levels-of-detail example, I can't help but laugh at myself. 

Honestly, I have seen all of these. And yet, when I last tested a rewritten login only a year ago, I had zero test cases written down.

Instead, I found a number of problems with the login:

  • Complementing functions. While it did log me in, it did not log me out but pretended it did. 
  • Performance. While it did log me in, it took its time. 
  • Session length. While it did log me in, two different parts of it disagreed on how long I was supposed to be in, resulting in fascinating symptoms when staying logged in long enough, combined with selected use of features.  
  • Concurrency. While it did log me in, it also logged me in a second time. And when it did so, it got really confused on which one of me did what. 
  • Security controls. While I could log in, the scenarios around forgetting passwords weren't quite what I would have expected. 
  • Multi-user. While it logged me in, it did not log me out fully, and sharing a computer between two different user names was an interesting experience. 
  • Browser functions. While it logged me in, it did not play nicely with browser functions remembering user names and passwords and password managers. 
  • Environment. While it worked in the test environment, it stopped working there when a component got upgraded. And it did not work in the production environment without ensuring it was set up (and tested) before depending on it. 
I could continue the list far further than I would feel comfortable. 

Notice how none of the forms of documenting testing suggest finding any of these problems. 
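To illustrate why, here is a toy in-memory session model, purely hypothetical and invented for this sketch rather than taken from any real login I tested. It deliberately reproduces two of the problems from the list above; a scripted "login works" check passes against it without ever surfacing either bug.

```python
import uuid

class BuggySessions:
    """Illustrative session store with two deliberate bugs."""

    def __init__(self):
        self.active = {}  # token -> username

    def login(self, user: str) -> str:
        # Bug 1 (Concurrency): a second login does not invalidate
        # the first, so the same user ends up logged in twice.
        token = uuid.uuid4().hex
        self.active[token] = user
        return token

    def logout(self, token: str) -> None:
        # Bug 2 (Complementing functions): pretends to log out
        # but keeps the session alive.
        pass

    def is_logged_in(self, token: str) -> bool:
        return token in self.active

sessions = BuggySessions()
first = sessions.login("maaret")
second = sessions.login("maaret")

# The scripted check stops here: "login works", pass.
assert sessions.is_logged_in(second)

# Exploring one step past the happy path exposes both problems:
assert sessions.is_logged_in(first)   # two concurrent sessions
sessions.logout(second)
assert sessions.is_logged_in(second)  # "logged out" but still in
print("both deliberate bugs demonstrated")
```

The point of the sketch is not the code but the gap: the first assertion is all a pass/fail column ever records, while the problems live in the steps after it.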

Testing isn't about the test cases, it's about comparing to expectations. The better I understand what I expect, the better I test. And like a good tester, if I know what I expect, I tell it in advance, and it still allows me to find things I did not know I expected, with the software under test as my external imagination. 

Feeling Testing

I have noticed I feel testing from three perspectives. 

  • I do 1st person testing for any and all software that I personally create. 
  • I do 2nd person testing for my work as a testing specialist in a software team.
  • I do 3rd person testing by using software. 

I feel very different about testing depending on which perspective I take.

When I do 1st person testing, I don't care where testing begins; it is everywhere. I explore my problem, identify the next piece of capability I will be adding, and test everything I rely on as well as what I create. When people later tell me things I already know, I am annoyed. When people later tell me things that surprise me, I'm delighted. I appreciate everything more if they walk with me and implement with me, not just guide me from the side without walking in these shoes. What helps me do well is having testing always there, not only after everything else is complete. Applying my 2nd person hat while doing 1st person testing happens naturally, with meetings and ends of days interrupting me. I used to hate this feeling of knowing a few ways to go forward and a hundred ways of going backward. With years of sitting with the feelings, I deal with it a little better. 

When I do 2nd person testing, I go for actively figuring out what the 1st person could have missed. Even when working in close collaboration, I step away from our mutual agreements. I create new models to see what we're missing. I start with the why (value), pay equal attention to what this depends on and what we're creating, and use all of my energy in aspiring for better. I care about the timing of my feedback, but I care about sufficiency (completeness) even more. I seek conversations, and expect conversations to change things. Good conversations, clarity of bug reports, mutual learning: that is what gives me joy. 

When I do 3rd person testing, every single problem annoys me. I carefully tread in ways that don't show me the most likely problems, because I'm doing my own thing, not a testing thing. Users stumble on problems; testers go seek them. If I find and report, it comes with extras: visibility they wouldn't want, or a request to compensate for my loss of service. 

The world is currently moving more and more work from 2nd person testing to 3rd person and 1st person testing. We know the 3rd person isn't willing to accept the work and do free labor. We know the 1st person needs that pair because software development is a social effort and working together helps us move (and learn) faster. 

I still feel testing is awesome - and too important to be left just for testers by profession. Looking forward to seeing where the mix goes. 

Friday, July 23, 2021

Ensemble Programming as Idea Integration

This week at the Agile 2021 conference, I took in some ideas that I am now processing. Perhaps the most pressing came from the post-talk question-and-answer session with Chris Lucian, where I picked up the following details about Hunter's ensemble (mob) programming:

  • Ideas of what to implement generally come to their team more ready-made than would be my happy place. I like to do a mix of discovery and delivery work, and would find myself unhappy with discovery being someone else's work. 
  • Optimizing for flow through a repeatable pattern is a focus: from scenario example to TDD all the way through. The talk's focus is on habits, as skill is both built into habit and overemphasized in the industry.
  • A working day for a full-time ensemble (mob) has one hour of learning in groups, 7 hours of other work split to a timeframe of working in rotations, pausing to retrospect and taking breaks. Friday is a special working day with two hours of learning in groups. 
The learning time puzzled me in particular: it is used for pulling knowledge others have, improving efficiency, and looking at new tech. 
A question people seem to ask a lot about Ensemble Programming (and Testing) is whether it would be something we do full-time, and that is exactly what it is, per accounts from Hunter, which originated the practice. Then again, with all the breaks they take, the learning time and the continuous stops for retrospective conversations, is that full-time? Well, it definitely fills the day and sets the agenda for people, together. 

This led me to think about individual contributions and ensembling. I do not come up with my best ideas while in the group. I come up with them when I sit and stare at a wall. Or take a shower. Or talk with my people (often other than the colleagues I work with), explaining my experience while trying to catch a thought. The best work-related ideas I have are individual reflections that, when I feel welcome, I share with the group. They are born in isolation, fueled by time together with people, and implemented, improved and integrated in collaboration. 

Full 8-hour working days with a preset agenda would push thinking time into my free time. Or require a change in how the time is allocated so that it fits. With so many retrospectives and a focus on kindness, consideration and respect, things do sound negotiable when one does not fold under the group's differing opinions. 

I took a moment to rewatch the amazing talk by Susan Cain on introverts. She reminds us: "Being the best talker and having the best ideas has zero correlation." However, being the worst talker and having the best ideas also has zero correlation. If you can't communicate your ideas and get others to accept and even amplify them, your ideas may never see the light of day. This was a particularly important lesson for me on Ensemble Programming: I had great ideas as a tester who did not program, but many, even most, of my ideas did not see the light of day.

Here's the thing: in most software development efforts, we are not looking for the absolute best ideas. But it would be great not to miss out on the great ideas in the people we hired just because we don't know how to work together and hear each other out.

And trust me - everyone has ideas worth listening to. Ideas worth evolving. Ideas that deserve to be heard. People matter and are valuable, and I'd love to see collaboration as value over competitiveness.

Best ideas are not created in ensembles, they are implemented and integrated in ensembles. If you can’t effectively bind together the ideas of multiple people, you won’t get big things done. Collaboration is aligning our individual contributions while optimizing learning so that the individuals in the group can contribute their best. 

Tuesday, July 20, 2021

Mapping the Future in Ensemble Testing

Six years ago when I started experimenting with ensemble testing, one of the key dynamics I set a lot of experiments around was *taking notes* and through that, *modeling the system*. 

At first, I used that notetaking/modeling as a role in the ensemble, rotating in the same cycle as the other roles. It was a role in which people got lost. Handing over a document you had not seen and trying to continue from it was harder than the other activities, and I quickly concluded that for an ensemble to stay on a common problem, the notes/model needed to be shared. 

I also tried a volunteer notetaker who would continuously describe what the ensemble was learning. I noticed a good notetaker became the facilitator, and generally ended up hijacking control from the rest of the group by pointing out, in a nice and unassuming way, the level of conversation we were having. 

So I arrived at the dynamic I start with now in all ensemble testing sessions. Notetaking/modeling is part of testing, and Hands (driver) will be executing notetaking at the request of the Brains (designated navigator) or Voices (other navigators). Other navigators can also keep their own notes of information to feed in later, but I have come to understand that in a new ensemble, they almost never will, and it works well for me as a facilitator to occasionally make space for people to offload the threads they model inside their heads into the shared visible notes/model. 

Recently I have been experimenting with yet another variation of the dynamic. Instead of notes/model that we share as a group and use Hands to make visible, I've allowed an ensemble to use Mural (a post-it wall) in the background to offload their threads, with a focus on mapping the future they are not allowed to work on right now because of the ongoing focus. It shows early promise of giving something extra to do for people who are Voices, letting them use their voice in a way that isn't shouting their ideas on top of what is ongoing but improving something that is shared.

Early observations say that some people like this, but it skews the idea of us all being on this task together and can cause people to find themselves unavailable for the work we are doing now, dwelling in the possible future. 

I could use a control group that has ensembled together longer; my groups tend to be formed for a teaching purpose, and the dynamics of an established ensemble are very different from those of a first-time ensemble. 

Experimenting continues. 

Wednesday, July 14, 2021

Ensemble Exploratory Testing and Unshared Model of Test Threads

When I first started working on specific adaptations of Ensemble Programming to become Ensemble Testing, I learned that it felt a lot harder to get a good experience on exploratory testing activity than on a programming activity, like test automation creation. When the world of options is completely open, and every single person in the ensemble has their own model of how they test, people need constraints that align them. 

An experienced exploratory tester creates constraints - and some even explain their constraints - in the moment to guide the learning that happens. But what about when our testers are not experienced exploratory testers, nor experienced in explaining their thinking? 

When we explore alone, we start somewhere, and we call that the start of a thread. Every test where we learn creates new options and opportunities; sometimes we *name a new thread* yet continue on what we were doing, sometimes we *start a new thread*. We build a tree of these threads, choosing which one is active, and manage the connections that soon start to resemble a network rather than a tree. This is the model that guides our decisions on what we do next, and on when we will say we are done. 

The model of threads is a personal thing we hold in our heads. And when we explore together in ensemble testing, we have two options:

  1. We accept that we have personal models that aren't shared, which could cause friction (hijacking control) 
  2. We create a visual model of our threads
The more we document - and modeling together is documenting - the slower we go. 
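To make the thread model concrete, here is a minimal sketch in Python. The class and the example thread names are hypothetical illustrations of the idea, not a tool from the sessions: a thread is a named line of inquiry, new threads branch off the work that spawned them, and the open threads are what guide the next move.

```python
class Thread:
    """One thread of exploration: a named line of inquiry, with
    children spawned by tests done while pursuing it."""
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.done = False
        if parent:
            parent.children.append(self)

def open_threads(root):
    """Collect the threads still guiding what we might do next."""
    found = [] if root.done else [root.name]
    for child in root.children:
        found.extend(open_threads(child))
    return found

# Start somewhere, then name new threads as learning creates options:
start = Thread("login form")
Thread("password field max length", parent=start)
wording = Thread("error message wording", parent=start)
wording.done = True  # a thread we chose to close

assert open_threads(start) == ["login form", "password field max length"]
```

In a solo session this structure lives in one head; the option above is about whether an ensemble accepts those private trees or makes one visible together.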

I reviewed various ensemble testing sessions I have been facilitating, and noticed an interesting pattern. The ensemble was more comfortable and at ease with their exploratory testing if I first gave them a constraint of producing visible test ideas before exploring. At the same time, they generally found fewer issues to start conversations on, and held on more strongly to the premature assumptions they had made of the application under test. 

Over time, it would be good for a group to create constraints that allow different people to show their natural styles of exploratory testing, to create a style the group shares. 

Wednesday, July 7, 2021

Working with Requirements

A long, long time ago I wrote down a quote that I never manage to remember when I want to write it down. Being in my top 10 of things I go back to, I should remember it by now. Alas, no. 

"If it's your decision to make, it's design. If it's not, it's a requirement." - Alistair Cockburn

Instead, I have no difficulty recalling the numerous times someone - usually one of the developers - has said that something *was not a requirement*. With all these years working to deliver software, I think we hide behind requirements a lot. And I feel we need to reconsider what really is a requirement. 

When our customers ask us for things they want in order to buy our solution, there's a lot of interpretation around their requirements. I have discovered we do best with that interpretation when we get to the *why* behind the *what* they are asking, and even then, things are negotiable much more often than not. 

In the last year, requirements have been my source of discontent, and concern. In the first project we delivered together, we had four requirements and one test case. And a thousand conversations. It was brilliant, and the conversations still pay back today. 

In the second project we delivered together, we had more carefully isolated requirements for various subsystems, but the conversation was significantly more cumbersome. I call it a success that 90% of the requirements vanished a week before delivery, while the scope of the delivery was better than those requirements led us to believe. 

In another effort in the last year, I have been going through meticulously written requirements and finding that the requirements make it harder to understand what we are building and why.

Requirements, for the most part, should truly be about the decisions we can't make. Otherwise, let's focus on designing for value. 

Thursday, July 1, 2021

Learning while Testing

You know how I keep emphasizing that exploratory testing is about learning? Not all testing is, but to really do a good job of exploratory testing, I would expect learning to be centered. Learning to optimize the value of our testing. But what does that mean in practice? Jenna gives us a chance to have a conversation on that with her thought experiment: 

When I first came across Jenna's thought experiment, I was going to pass on it. I hate being put on the spot with exercises where the exercise designer holds the secret to what you will trip on. But then someone I admire dared to take the challenge on in a way that did not optimize for *speed of learning*, and this made me wonder what I would really even respond. 

It Starts with a Model

Reading through the statement, a model starts to form in my head. I have a balance of some sort that limits my ability to withdraw money, a withdrawal of some sort that describes the action I'm about to perform, ATM functionalities of some sort, and a day that frames my limitation in time. 

I have no knowledge on what works and what does not, and I don't know *why* it would matter that there is a limit in the first place. 

The First Test

If I first test a positive case - having more than $300 on my balance, withdrawing $300, expecting the cash at hand, and then the smallest sum on top of that that the functionalities of the ATM allow for - I would at least know the limit can work. But that significantly limits anything else I can learn on the first day. 

I would not have learned anything about the requirement though. Or risks, as the other side of the requirement. 

But I could have learned that even the most basic case does not work. That isn't a lot of learning though. 

Seeing Options

To know if I am learning as much as I could, it helps if I see my options. So I draw a mindmap. 

Marking the choices my first test would make shows how I limit my learning about the requirements. Every single branch in itself is a question of whether that type of a thing exists within the requirements, and I would know little other than what was easily made available. 

I could easily adjust my first test by at least giving myself a tour of the functionalities the ATM has before completing my transaction. And covering all the ways I can imagine going over the limit after that first transaction gets me there would lift some of the limitations I first see as limiting learning over time. 

Making choices on learning

To learn about the requirement, I would like to test things that would teach me around the concepts of what scope the limit pertains to (one user / one account / one account type / one ATM) and what assumptions are built into the idea of having a daily limit with a secondary limit through balance. 

For all we know, the way the requirement is written, it could be an ATM-specific withdrawal limit! 
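The scope question can be made concrete with a small sketch. Everything here is hypothetical - the `ATM` class, the `scope` parameter, and the implementation are mine, with only the $300 daily limit taken from the thought experiment. The point it illustrates: the very same pair of withdrawals passes or fails depending on which interpretation of the limit's scope we assume, so each interpretation needs its own test.

```python
from datetime import date

class ATM:
    """Hypothetical model of a daily withdrawal limit whose scope is
    ambiguous: per user, per account, or per ATM - the requirement as
    written does not say which."""
    DAILY_LIMIT = 300

    def __init__(self, scope="user"):
        self.scope = scope
        self.withdrawn = {}  # key shape depends on the scope interpretation

    def _key(self, user, account, atm_id, day):
        if self.scope == "user":
            return (user, day)
        if self.scope == "account":
            return (account, day)
        return (atm_id, day)  # ATM-specific limit

    def withdraw(self, user, account, atm_id, amount, day=None):
        day = day or date.today()
        key = self._key(user, account, atm_id, day)
        total = self.withdrawn.get(key, 0)
        if total + amount > self.DAILY_LIMIT:
            return False  # limit exceeded under this interpretation
        self.withdrawn[key] = total + amount
        return True

# Same two withdrawals, different outcomes per interpretation:
per_user = ATM(scope="user")
assert per_user.withdraw("alice", "acct-1", "atm-A", 300, "2021-07-01")
assert not per_user.withdraw("alice", "acct-2", "atm-B", 1, "2021-07-01")

per_account = ATM(scope="account")
assert per_account.withdraw("alice", "acct-1", "atm-A", 300, "2021-07-01")
assert per_account.withdraw("alice", "acct-2", "atm-B", 1, "2021-07-01")
```

Under the per-user reading the second withdrawal is refused; under the per-account reading it goes through. Only learning against the real system tells us which model it implements.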

Hands-on time with the application, learning, would show us what frames its behavior fits in, and without time to first test things out, I would just want to walk away at this point. 

Wednesday, June 30, 2021

Social Programming Approaches

A year ago, I was preparing for a keynote coming up in autumn 2020 where I had promised to talk about social software testing approaches. Organizing my experiences, I wrote an article for bbst.courses and started the work of replacing mob and mobbing with ensemble and ensembling. 

I described four prominent social software testing approaches, two for groups and two for pairs. 

  • Ensemble testing 
  • Bug bash
  • Traditional pair testing
  • Strong-style pair testing
Today, I started thinking through the programming equivalents of these, and how I have come to make sense of the concepts. I have long ago accepted that everyone will use words as they please, and more words explaining how we think around the words is usually better. Today I wanted to think through:
  • Traditional and Strong-Style Pair Programming
  • Ensemble programming
  • Swarming
  • Code retreats
  • Coding dojos
  • Hackathons and Codefest
  • Bug Bust

Bug Bust is a new type of event that resembles a Bug Bash on the testing side, focused on cleaning up easily identifiable code bugs. I have not seen anyone run one yet, but I noticed AWS is setting this up as a thing to do, and time will tell if it turns out to be popular. 

Hackathon is the closest programming equivalent to bug bashes on the testing side. Hackathons come to programming with the idea of creating a program in a limited timeframe, usually in a competitive setting. A hackathon comes with a prize, seeks survival of the fittest solution, and seeks to impress. I generally hate it for its competitive nature. It says there is a team, but how the team works is open. The competition is between teams. Similar time-boxed things without the competitiveness have been dubbed "20% time" and codefest [Hilkka Huotari writes about this here in Finnish], still seeking this idea of bringing emergent teams together on something uncertain yet relevant, a bit off the usual path of what we're working on, for a limited amount of time.

Coding dojos are practice-oriented group sessions. Having experienced some, I have come to think of them as pair working and group watching, with rotation. Truth be told, coding dojos are a predecessor to ensemble programming. Coding dojos usually happened in combination with code katas, rehearsal problems we would solve in this format. 

Code retreats are also practice-oriented group sessions, organized around repeatedly solving the same problem in code under various constraints. The usual problem, which I have experienced some tens of times now, is the game of life, and there are whole books on constraints we could try and introduce. A constraint could be something like taking steps so small that tests pass and the code gets checked in every 2 minutes, or seeking a simpler, smaller step for the solution. The work is done in pairs, pairs are switched between sessions, and the learning experience is framed with regular retrospectives to cross-pollinate ideas and experiences. 

Swarming is a teamwork method that I differentiate from ensemble programming with the idea that it brings together subgroups rather than teams, and is inherently temporary in nature. The driving force for swarming is a problem that needs attention from multiple people. I have come to swarming from the context of kanban boards, where work-in-progress limits require special attention to one of the activities we model, and swarming is a way of ensuring we can get the work moving from where it has been piling up. 

Ensemble programming is about the entire team working as a team together on one task at a time. To work on the same task, it usually means sharing one keyboard. A new group ensembling looks an awful lot like a coding dojo, with guardrail rules to enforce a communication baseline, except we may also be working on production code related tasks, not just rehearsal problems. A seasoned group communicates on an entirely different level, and the dynamic looks different. 

Strong-style pairing is ensemble programming as a pair. It asks for the same rule as the ensemble, where ideas and decisions come from the person off the keyboard.

Traditional pairing is the pairing where the one on the keyboard is doing the programming and the other, off keyboard, is reviewing and contributing.

We have plenty of options for social programming available to us. Social programming is worthwhile because we don't know what we don't know, and sharing the context of doing something together brings us those serendipitous learnings, in the context of doing. How much and what kind of social programming you sprinkle into your teams is up to you. 

Tuesday, June 1, 2021

Scaling by the Bubble

At work, there are many change programs ongoing, to an extent that it makes me feel overwhelmed. 

We have a change program I walked away from that tries to change half the organization by injecting a process with Jira. I walked away as I just felt so disconnected from the goals and decided that sleeping better and being true to my values would always win over trying to fix things at some kind of scale. Me walking away gives those who continue a better chance of completing it, and we can come back to reflect on the impacts at some appropriate scale.

We have a change program I just volunteered with, seeking the benefits of platforms and product lines, and I still believe it could be a nice forum of like-minded people to figure out alignment. 

And we have a change program where we audit and assess teams for their implementations of whatever the process is, giving me a chance to also consider the relationship of process and results. I volunteer with that one too.

But in general, I have come to understand that I make major changes in organizations in a very different style than what we usually see. And as I just listened to Woody Zuill mention the same style, giving it the name 'the Bubble', I felt like I needed to write about scaling by the Bubble.

The basic idea, as I see it, with 'the Bubble' is that instead of starting where it is hard - in scale - we start where it is possible. Injecting someone like me into a team that needs to change towards continuous delivery, modern agile, and an impact/value-oriented way of working with streams of value that can be reused over products is an intervention introducing the start of the Bubble. 

My bubble may start with "system testing", but it soon grows to "releases", "customer experience", "requirements", "conversations" and through "technical excellence" to "developer productivity". Instead of planning the structure we seek, we discover the structure by making small shaping changes continuously. We protect the change by 'the Bubble', creating interfaces that simplify and focus things in the bubble. And we grow the bubble by sharing results that are real, recent and local to the organization. 

Having been around in organizations, I see too many top-down improvements (failing) and not enough bubble-based improvements. 

My bubble now is trying to change, over the next two years, culture and expectations in scale. Every day, every small improvement, makes tomorrow a little better than today. Continuous streams make up great changes. 

Saturday, May 29, 2021

Scale - Teaching Exploratory Testing

At work, I hold three roles within the position of a principal test engineer: a tester, a test project facilitator and a test competence facilitator. I have 37.5 hours a week to invest in all three, and while my productivity has soared compared to the early days of my career, the days feel just as short now as they ever did. 

Some of my hobbies resemble my work, but they reach outside the organization: speaking, writing, organizing, teaching, reading. I deliver an occasional paid training and paid keynote. I write my book. I try to record my podcast with limited success. I structure my thoughts into talks and articles.

At this point of my life and career, I chose my theme of the year to be scale. Scale of impacts I induce at work I care for. Scale of teaching forward what I have learned. Enabling others in scale. 

With close to 450 sessions delivered for audiences outside my work, more people know me than I recognize. Things like someone telling me when we first met, while I can name them only from the pairing we did in the last month, are all too common. It's not that I forget people, it's that I never came to remember everyone in the first place. 

As I reflect on my goal of scale, I come to an idea of what scale would look like for my teaching. It would look like me teaching a network of teachers who teach forward. I already took some steps towards this by launching https://www.exploratorytestingacademy.com where all course material is Creative Commons Attribution, allowing you all to use it for your business, even to make a business of your own. 

I make time to teach free courses every now and then, like during the Exploratory Testing Week. I make time to teach commercial courses every now and then, as my side job, but my availability is limited as I love my day-job and the assignment that allows me to choose in transforming quality and testing. 

I need other people, willing to teach, whom I would teach my exercises and materials, and who would adjust them to what they feel they want to take forward. I have a lot of theory and example material, as slides. Like with Exploratory Testing Foundations, I make them available at Exploratory Testing Academy. But I also have a lot of experiential exercises, where facilitating and framing the exercise is where the value for learning is best created. 

Would you want to learn to teach experiential testing exercises? 

I envision a series of sessions where we would first experience the exercise as participants, but then turn the roles around first into looking at what facilitating such exercise means and then practicing facilitation while I support, watch and give feedback after. I haven't done this yet, so we could discover what works together. 

You could learn to teach different testing experiences with different applications. I use:

  • a Textbox
  • E-Primer
  • Weather App
  • Gilded Rose
  • Zippopotamus
  • Dark Function Editor
  • Freemind
  • Xmind
  • Conduit
  • ApprovalTests
  • RobotFramework
I also have exercises on understanding your tester personality, agile incremental test planning, test strategy, test retrospective for release and feature, business value and many many more I would be happy to pass on. 

Interested? Let me know. You might also tell me what you'd like to start from, because the exercises I have created since 2001, when I started teaching on the side, would fill a few years to go through on the side of a regular job. For prioritizing my time, I ask you to consider my goal - scale. Could you help me with that? If your answer is yes, I'm going to trust you and dedicate some time to help you learn this. Send me an email: maaret(at)iki.fi.

In case it isn't already clear, I am not looking to invoice anyone to teach them. I will volunteer my time for free within constraints of what I can make available. I want more people in the world to experience experiential learning and for myself to make an impact.