Saturday, January 18, 2020

Say It Out Loud - it's Testing


Sitting in front of their computer, with a focused expression on their face, the tester is testing a new feature. Armed with their notes from all the whiteboard sessions, from design reviews, from passing comments about what we're changing, and from whatever requirements documentation they have, they've built their own list of functions they are about to verify exist and work as expected.

"Error handling" says one of the lines in the functions list. Of course, every feature we implement should have error handling. Into the user interface fields where a sum of money is expected, they type away things that aren't numbers and make no sense as money. With typing of "hundred" being ok'd and just saved away to be reviewed later, it is obvious that whatever calculations we were planning to do later to add things up will not work, and armed with their trusty Jira bug reporting tool, they breathe in and out to create an objective step by step bug report explaining that the absence of error reporting is indeed a bug.

Minutes later, the developer sharing the same room just pings back saying the first version did not yet have error handling implemented. The tester breathes some more.
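The piece that was missing is small in code. As a minimal sketch - in Python, with hypothetical names, not the product's actual implementation - the kind of validation the tester was probing for could look like this:

    from decimal import Decimal, InvalidOperation

    def parse_money(raw: str) -> Decimal:
        """Reject input that makes no sense as money instead of silently saving it."""
        try:
            amount = Decimal(raw.strip())
        except InvalidOperation:
            raise ValueError(f"Not a valid sum of money: {raw!r}")
        if amount < 0:
            raise ValueError(f"A sum of money can't be negative here: {raw!r}")
        return amount

    # "hundred" should be rejected up front, not saved away for later sums to choke on.
    for raw in ["100.50", "hundred"]:
        try:
            print(raw, "->", parse_money(raw))
        except ValueError as error:
            print(raw, "->", error)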

---
The thing is, errors of omitting complete features are very common finds for us testers. Having found some thousands of them over my tester career, I'm also imagining I see a pattern. The reactions to errors of omitting complete features very often indicate that this did not come as a surprise to the developer. They were giving you a chance to see something they build incrementally, but weren't guessing *you* would start your testing from the place they would reach last in their development.

A Better Way

When you build your list of functions to verify, how about sharing that list with the developer? A discussion about which of these they expect to work would save you a lot of mental energy and allow you to direct it at their claims, going deeper than just the function. With that list, you would most likely learn with them that "Error handling" for this feature won't yet be in Wednesday's builds, because they planned to work on it only from Friday on.

You could also ask in a way that makes them jump into showing you where the function is in the code. Even if you don't understand code, you understand sizes of things. Seeing that one thing is conceptually a single block of code, that another is sprinkled around as they show it, and that something else is so big it makes you want to fall asleep just looking at it - all of these give you hints on how to explore in relation to what your developer just showed you.

If you read code, go find some of that stuff yourself. But still, drag your developer into the discussion as soon as you suspect something might not be there.

Code Reviews vs. Testing

When organizations review code through, for example, pull requests, errors of function omission are hard to spot without someone triggering this particular perspective. If you have a list of things you expect to see implemented and one of them is missing, there is no way that functionality could end up working in testing either.

Sometimes, when you have a hunch that something discussed in the design meetings has been forgotten from the implementation, the way to figure it out isn't to install and test - it is to ask about the feature. Say your idea out loud, see a developer go check it, and learn that something not implemented has no chance of working.

Always jumping to testing isn't the only tool you have as a tester, even if you don't write (or read) code.

Sunday, January 5, 2020

Hundreds of Hours of Mob Programming over Four Years - Is It Still Worth It?

With four years of mob programming (and testing) with various groups, I feel it is time to reflect a little.
  • I get to work with temporary mobs
  • I often teach (and enable learning) in a mob
As people have spent the majority of their careers learning to work well apart, supported by other people, working well together is not something I can expect as a new mob comes together.

Mob programming is a powerful learning tool. It has helped me learn about team dynamics and enabled addressing patterns that people keep quiet and hidden. It has helped me learn how people test, how different skillsets and approaches interact, and come to appreciate that it is a totally unacceptable way of learning and working for some people, uncomfortable for others, while some people just love it. Most people accept it for the two days we spend together, but would opt out should the mechanism find its way to their offices.

One thing remains through the four years - people are curious about how five people doing the work of one could be productive. What does it really mean if we say that working together, at the same time, we get the best out of everyone into the work we're doing?

Contributing and Learning

There are two outputs of value from our work, whether individual or in a group. We can be contributing, taking the work forward. Or we can be learning, improving ourselves so that we solve work problems better later on.

Contributing enables our business to distribute copies of the software we are creating, and in the short and medium term, it scales nicely in value if we manage to avoid the pitfalls of technical debt dragging us down, or of building the wrong things no one cares to scale. There's a lot of value not just in delivering the maximum contribution over a longer period of time, but in being able to turn an idea into software in use fast. Delivering when the need is recognized rather than a year later, and distributing copies of value at scale for an extra year, turns into money for the company. We're ok paying a little more in effort as long as we receive more over the timeline of it paying itself back.

Learning enables the people to be better at doing the work. And as the work is creative problem solving, there's a lot of value in seeing how others do things in action, to help us learn. Over time, learning is powerful.
If my efforts in learning allow me to become 1% better every single day of the year, I am 37.8 times the version of myself in a year. That justifies a significant use of time today to continuously keep things improving for the future me.
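The arithmetic behind that number is just compounding; a quick sketch of it:

    # 1% better every day, compounded over a year: 1.01 ** 365 is roughly 37.8.
    daily_factor = 1.01
    days_in_year = 365
    print(round(daily_factor ** days_in_year, 1))  # -> 37.8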

There's a lot of value in contributing effectively, getting the best work from us all into the work we are doing. Removing mistakes as they are being made. Caring for quality as we're building things. Avoiding technical debt. Avoiding bottlenecks of knowledge, so that support can happen even when most of the team is on vacation and just little me is left to hold the fort.

Mob Programming helps with this. And it helps a lot. 

Question Queue Time

Have you ever been working away at something, and then realized you need a clarification? It's the busiest person on your team who at least knows the answer, but since they are the busiest, you take the minute to type your problem into a team chat, hoping someone answers. Sometimes you need to wait for that one busiest person. With a culture like ours, responding to others pulling information is considered a priority, and others stop doing what they were doing to get you to an answer that is not immediate.

If getting that answer takes you 10 minutes, it takes 10 minutes for someone else too. With that chat channel, it probably takes 10 minutes of thinking for a lot of people, including the ones curious about an answer they too find themselves not having (but also not needing right now).

If they put that question into an email, the wait time is more like a day. And the work waits, even if other work may happen. 

If your mob includes people with the answers, getting to a place where you have no question queue time could be possible. 

It's not just the questions, though. It is any form of waiting. Slow builds to get to testing. Finding a set of tools when you need it. Getting started on a new microservice. Discussing rather than trying.

When the whole mob waits, the wasted time is multiplied. This seems to drive groups to innovate on the ways they work, and to take time to do the work that removes the wait time. In places where individuals suffer in silence, mobs take action.

If a mob works on something boring, they often end up automating it. If a mob works on something an individual alone has a hard time solving, they get it done. And usually they get it done by trying out multiple approaches, deciding through action which way they do things.

What I find, though, is that even in mobs we don't have all the answers. In my work, the over-reliance on waiting for an answer we had already had confirmed by a product owner led us to having no product owner - just to emphasize that we don't need to wait for an answer, as waiting costs just as much as potentially making a mistake. The removal of the product owner revealed that there are answers not available from a person, ones that require a discovery process.

Growing Your Wizard

Mobbing is a challenge, but also rewarding. It has amplified my learning. Knowing it exists and not being able to use it, I look at the juniors who learned a lot, but not as much as they could have if we had been mob programming, as an opportunity wasted to optimize for the convenience of our seniors.

We need to help our different team members level up to find their unique superpowers. We need to grow our wizards, and not just expect them to somehow either get through or give up trying. And answering their questions when they don't know what they don't know is just not enough.

In four years, I haven't ended up with a team that would try mobbing for a significant portion of their time. The year of mobbing once every two weeks at my previous place of work is the furthest I got with a single team. But I have spent hundreds of hours mobbing, and even if I have more to try, I have learned a lot about how to get started.



Saturday, January 4, 2020

Tester Superpowers

In August 2019, on a lovely day after the Conference for the Association for Software Testing (CAST), a small group of people got together for a day to discuss exploratory testing. It was the second in the series of Exploratory Testing Workshops, and today I remembered the piece that excited me most to learn in this one. I learned that testers have insightful and unique ways to describe what they do at work, ways that seem to surprise people and make them unique. We called them Superpowers, and I collected what I could in tweets, recognizing commonality with what I find myself doing.
Synthesis is about information collection, pattern creation, and use of information in ways that are surprising. Testers, with their cognitive focus on digesting and sharing information, become knowledgeable about the products and the decisions. It is not the same as having a good memory, but rather a very selective memory that collects pieces that turn out useful later.
Holding space is a superpower I dedicated a talk to at TestBash NL some years back, coming to the realization that sometimes quality and testing happen just by having me in the room. Holding space for people may be slightly different from holding space for themes. The idea that we don't only focus on the negative but build up people (because people build quality) as their colleagues is a powerful one.
Listening sounds easier than it is. Hearing beyond words, getting to what people mean and how that connects in time with other things people say, is an information intake method crucial for action. We don't listen to respond, we listen to learn. Much of what we listen to requires later processing for the learning to emerge - the connections happen both in the moment and over time.
Structuring is seeing patterns, and not only keeping the learning about patterns to yourself, but digesting it for others. Reporting in testing is based on finding ways of explaining things that are complex, while still explaining them in an actionable way.

With a small group, I wrote down only a few - and not my own. Months later, I can't remember what I said in the round of describing our superpowers, or if I was using mine to scribe things for further analysis. What's your superpower?

Friday, January 3, 2020

More Words is Better Than Less

Recently, I've found myself teaming up on Agile Alliance initiatives Seb Rose is facilitating.

Agile Alliance is not hugely popular in speaker fairness circles. They would seem to make quite a lot of money from the big Agile conference in the US, yet only pay for the hotel, not travel or an honorarium, for their speakers. They're established. They draw thousands of people. Feels unfair. Add the "agile" label with no support for pair presenting, where the second presenter even needs to buy a ticket for the event to come speak, and it's fair to state there's unhappiness around these choices.

On the other side, Agile Alliance has helped new conferences get started (I remember them fondly for their support in starting European Testing Conference in its first year), supports a lot of local chapters and meetups, and is probably of a size that needs paid staff to run in the first place. There aren't that many other sources of financing, so they might need to make some (even if at the expense of speakers) from their big conference.

All of the financials are speculation. I have absolutely no visibility into where they make money and where the money goes, other than that the few friends I know rightfully benefit from their support in making the world a better place, and that alone earns a little bit of my respect.

Around the end of 2019, Seb Rose shared the news of two initiatives he was preparing for Agile Alliance, around changing the face of speakers at Agile 202x conferences.

The first initiative was a one-time experiment of handing out a lump sum of money to pay speaker expenses under a diversity flag for the Agile 2020 conference. This 25k could enable new and seasoned voices, ones unable or unwilling to make themselves available without travel compensation, to add to diversity (in the broad definition of it).

The second initiative, very clearly a match for what we do with TechVoices, was on mentoring new voices, with the idea that it would be a continuous one across multiple conferences, though without committing beyond Agile 2020 yet, as experimenting with approaches seemed like a smart thing to do.

As I have my own perceptions of what these programs are, and I am not writing in Agile Alliance's official channels, I thought I'd use more words to explain what they are and why I believe they are a good thing.

The Diversity Initiative

A few days ago, I saw this launch and today I tweeted in support of this initiative:
I love the step. I don't love how the invitation reads, and I have provided ideas on how to improve it. While it has not yet been improved, imagine it saying something like this.

Agile Alliance allocated a lump sum of 25,000 dollars for a diversity initiative led by Seb Rose. Seb is lovely and really cares about this stuff. The initiative is an experiment to figure out the reality of what the Agile 2020 conference could be getting if it were paying speakers' travel (and other participation-preventing) expenses against receipts.

This initiative seeks those voices that really are unavailable to this conference due to financial constraints. Getting listed gives us a feel for the scale of the problem and a mechanism to help some portion of these voices.

There are still two parts: get listed for the financial-limitations diversity initiative AND submit your proposal. Give us a chance to consider your great content as part of the Agile 2020 program. Here is where you can register to make your appearance financially conditional: https://www.agilealliance.org/agile2020/agile-2020-speaker-diversity-initiative

The lump sum is limited, and the impact we'd love to make with this money is changing the face of speakers at Agile 2020, even if just a little bit. There are people who quietly (or loudly) can't join a pay-to-speak conference, and the reasons for this are manifold. A working theory is that people in particular groups might be hitting financial constraints, making their voices unavailable in proposals where acceptance would have a financial implication.

It is clear 25,000 dollars will not be sufficient for all Agile 2020 speakers (I believe there are over a hundred of them), and organizing all of this dependency on finances needs to somehow fit together with the multitrack, multichair Agile 2020 Call for Proposals process. The chairs cannot deal with distributing responsibility for this initiative on top of what they already do. So the proposals need to come in normally, and we need to experiment with this on the side.

Registering for this initiative tweaks the usual proposal as little as possible. If you register with this program, you are saying it is ok to bundle together the financing decision, made according to diversity prioritization, and the decision on your paper's acceptance. Your paper could be great and acceptable, but if the finances don't work out, that totals in hoping for another time when the finances can be sorted. Your withdrawal is part of the process, and there is no blame assigned to you for having to say no. Your availability is strongly conditional on the finances.

Even if the form asks for your sad story, feel free to skip it. Focus on explaining what the conference's diversity is missing out on with you absent. And particularly, focus on giving the call for proposals a proposal they would feel bad about losing over making speakers pay, so that we have better chances of changing this in future years.

The Mentoring Initiative

The mentoring initiative targets first-time speakers. There are so many great sessions we don't get to fully consider, because while new speakers can make great sessions, they also greatly benefit from help in making their idea shine: focusing it, ensuring its specialty and usefulness, and just offering some ideas for improvement. Mentoring is great for this, and the conference's normal format already includes track chairs and volunteers giving feedback on submissions added early into the submission system.

The mentoring initiative adds a little extra support. We are currently collecting a group of mentors who volunteer to spend 15 minutes in collaboration calls helping find the core of a speaker's idea. https://www.agilealliance.org/agile2020/first-time-speaker-mentoring-initiative/

Next week we open the calls for people who want to try this extra support for getting their proposal ready. A personal touch from someone who has done it before can do wonders, and at worst, you'll have a lovely 15-minute discussion about your idea with someone who wants to see you succeed with it.

Out of this we get a quick view of what is out there, and we get you started with writing the proposal into the call for proposals system. The mentor you spoke with online can jump in to help you get what you said in the call into the writing you submit, as they will know more of what you're trying to say than someone who did not spend those 15 minutes with you.

Give us a chance to hear your ideas. We can't change the teaching the sessions do if the sessions repeat the same people's experiences. And we have a whole agile journey ahead of us where different experiences are crucial for getting the hang of what others can teach us.

---
See, I use more words. I don't need to try to say things in a nutshell. I believe there are people who need to read more words to feel welcome in what we are trying to do here. Our intentions are good, and we are listening.