Friday, December 30, 2016

Privilege of location

In the last two years, the world has become smaller for me.

I've been in a relationship involving a lot of Skype discussions and grown to appreciate how things that could have been financially impossible in the time before free calls over the internet are now something I can rely on. I've fallen asleep with Skype on and woken up with Skype on - before it became more unreliable and started disconnecting.

I've actively learned to initiate discussions with other professionals over Skype and learned to pair test and program over join.me or Screenhero. I've met every person who submits to the conference I'm organizing as if we met in person, over Skype. And when solving a problem at work that involves remoteness, I'm now the one who suggests moving from text to voice, because I know from experience that it is possible.

When I travel, I just decide where I will go. My Finnish passport lets me into most places without any additional paperwork in the way of visa applications.

Recently, I've grown to see what huge privilege this is. And how, while the world is small for us all with technology, the physical presence aspects are not quite as straightforward.

For European Testing Conference 2016, I invited Jyothi Rangaiah from India to speak in Romania. I was totally unaware of what it would mean to me as a conference organizer. She needed to apply for a visa. For that she needed an invitation from a conference organization registered in that country, tickets purchased and a hotel reserved under her name. So we booked her hotel separately from the others. She bought the tickets. We wrote an invitation letter. With all these papers, she would need to travel to another city to apply for the visa. And it all failed because the conference organization is a registered non-profit in Finland, not in Romania. We naturally covered the costs, lost the work and did not have Jyothi present.

For European Testing Conference 2017, we decided to try again. And I'm looking forward to being able to introduce Jyothi's talk to our community.

As a conference organizer, I've put numerous (extra) hours into being able to have her here. I expect there are still calls or even visits to the embassy before this is all done. I respect her persistence in getting over more obstacles than I ever realized there were.

But this also makes me think. While I believe in inclusiveness, how much better do Indian speakers have to be to be included, for the extra effort they require? How much higher are my expectations going to be for someone who is both financially and effort-wise more costly? Out of the vast numbers of new speakers, why would I choose to invest in someone coming from a "complicated" location?

For me, the answer is simple. I will do it because I believe in change.

But stop telling me this:
The vast majority of people never get to consider speaking at a conference. The fact that we don't know this is called privilege. It is not our problem.

Hat tip to all the people who make it as international conference speakers with less privilege than I have. And as a white (not rich) female, I have a lot of privilege, even if white (rich) males tend to have even more.

Thursday, December 29, 2016

The Special Kinds of Testing that Aren't that Special

My tweet stream today inspired me to write a bit about why words don't matter quite as much as we seem to think they do. It wasn't just one tweet that inspired me, but several. But if I had to name one, this would be it.
I wrote myself a little post-it note saying:

Isn't all testing
   ... model-based
   ... risk-based
   ... exploratory?

Having done model-based testing, it distinctively describes (to me) testing that is based on explicit, programmable models.

Having done risk-based testing, it distinctively describes (to me) a way of testing where we identify things that could go wrong and their relevance, and test with a focus on that information (as opposed to requirements-based testing).

Having done exploratory testing, it distinctively describes (to me) an approach to testing where thinking and learning are at the core, over repeating maneuvers.
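Of the three, model-based testing is the most directly codifiable. As a minimal sketch (using a hypothetical two-state login dialog invented for illustration, not any real system), an explicit, programmable model could look like:

```python
# Hypothetical model of a login dialog as an explicit state machine.
# Each state maps to the actions it allows and the state each action
# leads to; test sequences are then generated from the model itself.
MODEL = {
    "logged_out": {"login": "logged_in"},
    "logged_in": {"logout": "logged_out", "refresh": "logged_in"},
}

def walks(state, length):
    """Yield every action sequence of the given length the model allows."""
    if length == 0:
        yield []
        return
    for action, target in MODEL[state].items():
        for rest in walks(target, length - 1):
            yield [action] + rest
```

Each generated walk would become one test case to execute against the real system; the model, not a hand-written list, decides what gets tested.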

We combine existing words to find ways of communicating ideas. Usually we need to put together more than two words. The order and number of words depends on the person we're communicating with, not just on the person delivering the message.

In words, more is better. These special kinds of testing are special, and they are not. The labels alone won't help you see what is included.

The Model Competition

There's this thing that keeps happening to me, and it makes me feel like I'm not good enough. Great people design a feature without me. They do the best job discussing it, write something down about it and later on, I join in. I look at what was written down, and talk to the people about the discussions that took place before me. And I just can't make sense of it to the depth I want. So I hide in a corner for a day, do my own search and design, seeking the motivations for why anyone would want to use this, and come out with a rewritten private version of the document that bears little resemblance to what was there in the first place.

It has happened so many times that I've gotten over the initial idea of me somehow being better at seeing things others don't. And I see others doing it to me, if they just have the approach of daring to take a day of reflection time.

The version I start with is not the feature, it's a model of the feature depicting parts considered relevant in the original discussion. The process of discussing was relevant, and I missed that. And the process of discussing would have been fueled by its participants reflecting on whatever external information they deem useful.

What I do is create another model. The value of my model is that it gives me time to build my understanding, and since I have more to build on than those who came before me, there tend to be aspects in my model that were missing from the earlier one and are deemed relevant.

No model is perfect. The idea of constantly improving and looking for new perspectives, models, to replace or enhance the old is needed.

This time, my model won over the older one, and got promoted as the common piece around which we communicate. And I just wish someone like me comes along, looks at it and does not accept it at face value. But that does not happen by just showing up; there is significant work in creating a model of the perspectives we may be missing now.

Wednesday, December 28, 2016

The Stress of Artificial Deadlines

"But we promised", he exclaims. There's a discussion going on about how most people are not proper professionals and how little they care, and I'm inclined to disagree. The promises are thrown into the discussion to show that one cares and others don't. "I've worked hundreds of extra hours because I care about the promises" concludes the discussion. The privilege of being able to find those extra hours is one thing, and the personal stress the extra hours have caused just may not be worth it.

For the last four years, I've worked hard to get rid of promises, because making those promises has, in my experience, more side effects than positive impact.

I'm one of the people who, in the face of a deadline, don't perform better. I get paralyzed. Knowing I have two hours to work on something that could be done in two hours but could also take four, I tend not to start. With free space ahead of me, I work productively and complete tasks. Thinking about how long something will take is crippling to me. So many people assume I will answer those questions, so over the years I've learned ways of managing my contributions in ways that suit me.

I've worked with a lot of people who, in the face of a deadline, use significant time on making excuses or covering their backs. "You did not tell me of this requirement", "I was sick for three days", "the dependencies were more complicated than I thought". All of these are true, but the more individuals commit to deliveries, the more they also commit to making sure they're safe.

I work to change my world to daily (or more frequent) releases, so that we can plan (and get surprised) one item at a time. I work from the premise of trusting that we'd rather do good things and make an impact on the world we're contributing to. I don't want to be one of the people who, on a personal level, carry the responsibility of all surprises by magically adding hours to their work days; I would rather be one of the people who deliver value all the way to production fast, trusting there's always another day to do more.

Two week sprints, sprint commitments under uncertainty and all the clarification ceremonies and blame assignment when commitments fail seem like such a waste of smart people's potential.

Some of us quietly invest in the ceremonies others want. Others work to change the ceremonies to focus more on value.

I'm so happy that the big deadline up front has moved to a small deadline close by. It's better for the company but it's definitely better for me. The stress of artificial deadlines can be all consuming.

Tuesday, December 27, 2016

You may not even know how much you talk

"We should try being quiet to hear better what the other team members have to say", I suggested to the in-room product owner. He was game as long as it wasn't just him who needed to temper his speaking; I would have to do that too.

And so we agreed. We would work on actively listening, and actively not saying what should be done. We would make room for others.

This was a practice I've suggested before: working out ways of moving myself from the role of active speaker in meetings into speaking in the background, encouraging those with less volume to say what they want to say, and allowing them to make their own (inexpensive) mistakes. Practicing not taking the last word (or competing for it), not having to contribute even when I thought I knew better, and being quiet. And it isn't easy.

I'm perceived as the one who always speaks up. Sometimes I speak up about things others are afraid of saying. Sometimes I repeat things others are saying but are not being heard on. And sometimes I just have things to say myself.

I speak enough to remember negative remarks about this habit. The colleague a decade ago who told me in a moment of frustration that only those who don't know how to do things talk this much. The moments where I realize I'm filling in other people's sentences or interrupting, thinking I know what they will say anyway. I don't recall people complaining much about it, but it often hits my self-improvement filters as something I would like to improve on: patience and the timing of my responses.

Looking into gender research, I came across the idea of listener bias: the idea that we are socialized to think women talk more than they actually do and to feel that they dominate discussions more easily than men. It led me onto a continued path of investigating the real dynamics. But as a first step, it inspired me to bring 'changing the speaking power dynamic' as the single rule for a Women in Agile summit at Agile Testing Days.

In my group of seven women and one man, getting excited over the topics we were discussing ended up with the single man taking more than a third of speaking time. I asked later if he realized what had happened, and he did not.

In another group, I asked the man in the group how the discussions went. He told me of the pain of not contributing as much as he could have, as he was an expert in the topics, but he had decided to play by the rules given and speak less to give room for the women's voices.

Alongside the experiment in quietness I share with my product owner, I will also take on another one. I just installed a gender timer to pay attention to how much we actually speak. It's my time to move from feedback and perceptions to a period of measurement. That should be interesting.

Saturday, December 24, 2016

Test Automation Leadership

I love being "just a tester". I've recently had great success in my company expressing what I do to non-development people by explaining I'm "just a tester" - but they know no one is really just anything. It gives me a quick way of breaking their perceptions of what "a tester" would do, and immediately opening them to the idea that I will be more, just like anyone else. I don't need to explain all the details to make the connection for us to share the work.

I don't act like just a tester to most people's stereotypes. And I hope I break people's stereotypes because I still am a tester, even if no one is ever just anything.

I think of myself as caring and self-organized. I try to think what is the best for those I've committed to (my company being one) and I don't expect others to come and tell me my tasks. I welcome suggestions of tasks, I love being asked to participate in providing feedback and I actively frame tasks to drive forward business value while learning about business value.

So last night, when I listened to a great podcast interview on Hanselminutes with Angie Jones and heard Angie talking about "leading that effort" as an automation engineer, I finally gave in and realized there's a wordplay I've been doing. I say I'm not a leader. Yet I lead all sorts of improvement efforts. I lead because people choose to follow. Speaking up about things that others listen to seems to be leadership. Starting examples of work others can join in on seems to be leadership. Making sense of our goals and approaches to get to them, and sharing observations, seems to be leadership.

People who are followed are leaders. Which made me think of a short video everyone should watch.

I can easily see Angie leading efforts in test automation as a skilled test automation engineer and a very outspoken personality. I can't see some of my colleagues leading efforts in test automation, because their focus is on the details of automation code, not on a vision of where it could go and how it could be better.

We need leaders and we need experts. Sometimes those are the same people. Other times not. And I would suggest that you can't define leaders based on their work titles but more through their personal characteristics.

Confessing I might be a leader after all brings another perspective to me. It still does not mean I should be "held to a higher standard". It still does not provide an open checkbook on my time for debates on anyone else's terms. I'm around to learn - with others.


Throw one away to learn

I sit in front of a computer, testing Alerts. It's a feature where something happens on the client side that triggers a dialog for the user, but it's not enough that the dialog is presented to the user; there's also an admin somewhere supporting the user who needs to know and understand what went on without showing up behind the user's back just to understand. I keep triggering the same thing, and as I move back and forth between my points of observation, I make notes of things I can change. I test the same thing to learn around it, to keep all my mental energies focused on learning.

Around the same time, my test automation engineer colleague creates a script that does pretty much the same thing without human interference. Afterwards she has only this one script while I have 200 tests, and I offend her by calling her test shallow. It's a good start, but there's more to testing the feature than the basic flow.

The main purpose of the testing I do (exploratory testing) is learning. The main purpose of the testing my test automation engineer colleague does is long-term execution of checks on previous learning. I believe that while keeping track of previous learning is relevant, the new learning is more relevant. Regression of bugs does not just happen; it happens for a reason, and I love the idea of building models that make my developers think I have magical superpowers for finding problems they (or I) couldn't imagine before.

Last night, as I listened to a great podcast interview on Hanselminutes with Angie Jones, I got triggered by Scott Hanselman repeating the idea that you should automate everything you do twice. With that mindset, my repeating the same thing a hundred times to feel bored enough to pinpoint the little differences I want to pay attention to would never get done. Angie saying "we can automate that" and facing resistance from people like me has many more layers than the immediate fear of being replaced. It has taken me a lot of time to learn that you can automate that and I can still do it manually for a very different purpose, even with people around me saying I don't need to do it manually when there is automation around.

My rule is: automate when you are no longer learning. Or learn while automating, in whatever frame you need for it. My biggest concern with test automation engineers is the focus of their learning - on the automation toolset over the product - and the gap in quality information that can create if not balanced.

This morning, I continued by listening to Brian Marick on useful experimentation, with two programming examples. He talked about the Mikado method, where you throw away code while refactoring with tests, with the idea of learning by doing before doing the final version. And he talked about Corey's technique of rewriting the code for three-day-sized user stories until, through learning from previous rounds, the code writing fits into one day of coding. Both methods emphasize learning while coding and making sure the learning that happens ends up in the end result that remains.

I can immediately see that this style of coding has a strong parallel to exploratory testing. In exploratory testing, you throw your tests away only to do them again, to use the learning you acquired through doing the same test before - except that it is not the same test once the learning comes into play. Exploratory testing is founded on deliberate learning, planning to throw one (or more) away while we learn. So before suggesting we should "automate that", consider whether the thing you think of automating is the thing I think I'm doing.

I have three framing suggestions. For the people who propose "we can automate that": make it visible that you understand that you can automate some part of what we are doing, and show appreciation for the rest of it too. For the people who react negatively to "we can automate that": consider welcoming the automation and still doing the same thing you were about to do. For the people who actually think they are doing the same thing by hand without understanding what they are learning in the manual process: learn more about learning while you're testing. The keyword for that is exploratory testing.

Thursday, December 22, 2016

The Binary of Coding in Testing

I've just spent a week deep in code, and yet yesterday in a podcast bio, I introduced myself as a non-programming programmer. To start today, I'm reading a blog post framing "I am not a developer. I have a background in Computer Science, I know a thing or two about writing code, but I am definitely not a developer."

People care what they are called and I care to be called a tester and I've only recently tried fitting on the additional label of "programmer".

Yesterday, I got called "my manual tester" by the product owner, with an instant annoyance reaction to both words: "my" (he owns the product's priorities, not me or my priorities in a self-organized team) and "manual" (exploring with a focus on test automation and thinking smarter than those who just focus on regression test automation - is that manual?).

There's this idea around with testers that in coding there are somehow "those who do it well". It hit a particularly sensitive spot seeing it yesterday, after a week of reading and linting generally not-the-best code and thinking back to my experiences with mob programming, which made me realize that there are not many people who inherently, all by themselves, do it well. Some, over time, develop better habits and patterns, but even they have more to learn - and usually, the best of them realize learning is never done. There are not people who "do it well". There are people who "do it and learn to do it better". Other than the fact that there's a large group of people who don't do code at all, the rest of it is a continuum, not a binary of being good. Thinking of programming as something you either have or don't is not a good way to approach this. I'd rather frame it as a friend helped me frame it: everyone who has written Hello World is a programmer.
I usually rather leave programming to people who enjoy it more than I do. That's why I'm a non-programming programmer. But there are aspects of code that I enjoy tremendously. Creating it together to solve problems. Reading it to figure out if there are better ways. Comparing it to expectations that can be codified (lint) and to patterns of businesses and real-world concepts. Understanding what is there (and could work) and what isn't there (so can't work). Making it cleaner when it was already made to run in the first place. Extending it within the limits of what I'm just slightly uncomfortable with. And that is already quite a lot.

I loved how Anna Baik framed this.
I find that the binary thinking keeps people with potential out, and managers who want to see more code in testing overemphasize code, setting foolish targets that make good testers into bad programmers. Getting rid of the binary, we would start talking about how the good testers could be good coders too, without giving up the good tester aspects they have.

It's really about learning. This morning, I complimented my daughter's Play-Doh creation, as it had significantly improved from the first version she showed last night. Her words back to me serve as a great reminder of the thing I would so love to remember every day at work:
"All it takes is practice and I will keep practicing". 

Tuesday, December 20, 2016

Respecting testing not testers

There's a recurring discussion I'm having with pretty much these points:
  • People who joined us as "testing specialists" are no longer testing specialists but specialists of everything software. 
  • We value good testing and good people with good testing skills, but we see no need of having testers or testing specialists. 
It's the same discussion again and again. I identify as a tester. I'm told by some of my peers that I don't qualify because I am also able to do other things (and will, for the good of my company - knowing that a lot of the time my special skills are the best I can offer my company, as others don't have the same deep skills). I'm told by non-testers that I can do testing, but not identify as a tester.

This is a recurring discussion that makes me sad, and often makes me feel like I want to quit the IT industry.

This "we're all developers", "we don't let people identify as testers" discussion is a lot like the discussions on gender. Be it "guys" or "men", the women are supposed to feel included because "naturally these words include all people". 

I want to be a tester. I want to be respected as a tester. Any references to "monkey-testers but we know you are not one" are just as offensive as saying that I'm one of the men, "not like other women". 
 
It's not enough to see that testing is important. You would also need to see that "tester" is a description of a specialty and a real profession. Stop pushing people out with wordplay, and just help them grow within (and stretch) their interests. We need good people to create good software; being inclusive would be a better route.

Thursday, December 15, 2016

Mixed messages - learning to live with it

I was working with a brilliant tester, and she looked puzzled. "I don't really understand what you want. Can't you just make up your mind?", she exclaimed. She added details: "Last week, you wanted me to pay attention to not generating too many bug reports the developers would reject. This week, you want me to address bugs with developers that they wouldn't consider bugs with their current understanding."

I could see she was frustrated and hoping for clear guidance. But I did not have it any more than she did. And I started to see her point. She was looking for a recipe to excel in the work she was doing. I did not believe there was a recipe you could follow, but there were all sorts of triggers to consider for trying out something different - experimenting intentionally and seeing how the dynamics change.



From the discussion about what we were discussing, she could only add one piece to her understanding: things would never be completely clear, they would keep changing as either one of us learned, and she could just do whatever she believed was right. It was her work, her choice.

As software professionals (and testers in particular), we get mixed messages and conflicting requirements. We work around them the best we can. Outsourcing clarity to a "manager" is one of the worst practices I can imagine for someone doing thinking work.

Take in conflicting information. And make your own choices. And remember: it's ok that not all your choices are right. Unlearning is sometimes more important than learning something in the first place.

"It depends on the context" is sometimes a comfort phrase we resort to when we feel there's a rationale we can't express clearly. Right now I prefer to think of my lack of knowledge and experience in the many things the future yet holds as part of this context. We do things we believe are right and responsible. And we own both our successes and failures.

Wednesday, December 14, 2016

All people are decision makers

I was testing a new implementation of an old feature and, as a new person, figuring out what of the old is intentional and what is new. For many of the questions I asked, I got told "it has always been this way, and it's ok". I chose not to fight - I made an active decision not to care.

The reason I could decide not to care is that I know there will be more chances to address that particular detail. I knew that soon I would rally together a group of "real users" to learn with about whether the things I think are relevant are indeed relevant. And changing it then versus changing it now is really not that big of a difference. The users seeing a problem (and feeling they got heard) may just be more valuable than users never seeing that problem.

I make decisions all the time. I decide on what I test and what I don't test. I decide on what I learn in detail, and where gut feeling or plain ignorance is sufficient. I decide what information to fight for and when. I decide how hard I will fight.

There's an expression in the testing community that really ticks me off but I most often just try to dismiss it:
Testers are not decision-makers but information providers.
All people are decision-makers. But very few of us make big decisions alone. The big decisions (like a release after a two-year project) depend on the prior small decisions being made well enough.

I'm a tester, and I regularly and habitually make decisions on releases. Those decisions tend to be small in scale because, since agile, the world has changed. Should the testing community reassess more of the learnings shared in the time before agile?

 

Tuesday, December 13, 2016

Pair em up while they learn!

"But you can't do that", he said. "It's going to be boring", he exclaimed. "You have to give them all their own computers or they won't learn a thing".

With 30 kids in the room, paired up in front of one computer for each pair, I had a moment to deal with a concerned parent who had strong feelings about how kids learn - by doing, not by watching.

"I know what I'm doing", I tried telling him. "I've done this before. We don't just pair them. Just watch. We do a thing called strong-style pairing". He did not look convinced, but I needed to step out and attend to the class anyway.

We introduced the rules I've learned with Llewellyn Falco. "Who is in front of the keyboard now? You're the hands only, no thinking allowed. Who is not in front of the keyboard? You're the boss. Use words to say what you want the other to do".

The buzz, the laughter, the learning commenced. And they loved it, staying fully engaged as we kept reminding them of the rules: the one not touching the keyboard is the boss, and we change positions every four minutes.

The concerned parent seemed happy too. He followed the whole session, and I saw him smile too. I suspect he was a developer who has either never pair programmed or, in particular, never strong-style pair programmed.

This event happened at my workplace, and the kids were my colleagues' kids. The concerned father is not an exception. Adults who know how to code are concerned about how their kids will feel while coding, because they care.

I've sat through my share of intro-to-programming events with my daughter. I find that a lot of sessions pair kids (especially young kids) with their parents, and while they then have a personal tutor, in my experience they learn less.

So I work with Llewellyn's rules: pair the kids up. Parents are welcome to stay and program, but kids pair with kids, adults pair with adults. There's nothing more motivating for kids than realizing they're completing the programming puzzles faster than their parents. After all, collaboration comes more naturally to kids, who are not yet worried about "efficiency" - results matter. We adults should keep that in mind a little more often.

Monday, December 12, 2016

Thinking of Heartbeat

Where I work, every room seems to have an always-on monitor for a test automation radiator. You know, one of those screens that, if all is well, turn dark blue after a while to indicate there have not been failures in ages (however "ages" is defined). One where any shade of blue is what you seek, and if you see yellow or red, you should be concerned.

As I've walked around, the amount of not-blue I see has been significant. Almost my favorite pastime is to see how people respond to someone paying attention, and the reactions vary from momentary shame to rationalized explanations.

One of the teams that has made more of a positive impact with their dark blue radiator got more of my attention, as bright colors weren't stealing the focus. I started reading the names of the things and ran into something interesting: Heartbeat.

As I saw that word, I had a strong connection to the idea of what it could mean. A small discussion confirmed it: heartbeat tests are the set that tells us whether the system we're testing is even alive enough that we could expect any other tests to give us information.

For the things my team tests, having a heartbeat depends on the backend systems, including a bunch of things running somewhere in the Amazon cloud. If any of the things we depend on fails, we fail. And for granularity, we want to know whether the brokenness is for internal reasons (we should learn something) or external reasons (we may have more limited control).

Of all the teams I see, only one has a heartbeat measured separately with test automation.

One team's heartbeat (a core dependency) is another team's core functionality. You only need a heartbeat when you can't observe that the other is alive by other means - their test statuses are not reliable, available or understandable to you.

Disconnect of Doing and Results

For most of us, our work is defined by availability, not results. For a monthly salary, we make (the best of) ourselves available for a particular number of hours every week, and those hours include a good mix of taking things forward for the company and taking oneself forward for the future of taking things forward for the company.

The weekly hours, 37.5 in Finland, remind us of a standard of sustainable pace. If you spend this amount of time in focused thinking about work (and my work in particular is thinking), the rest of the time is free. The idea of the weekly hours helps protect those of us with a habit of forgetting to pay attention to what is sustainable, but it also defines the amount of availability our employers are allowed to expect of us.

In the thinking industry, the hours shouldn't really be the core but the results. Are we using our thinking hours to create something of value? Being available and being busy are not necessarily the actions that provide results.

There have been two instances recently when I've paid attention to hours in particular.
  1. Learning about a Competitiveness Pact Agreement adding 24 additional working hours to 2017 (and onwards) 
  2. Setting up my personal level of Results I'm happy with
The first one, as I became aware of it, was more of a motivation discount. It made me painfully aware of how much I hate the idea of working on the clock, and how much extra stress I get from having to stamp myself in and out of the workplace at my new job, compared with reporting all hours manually at my last one. The new step makes me ashamed of the hours I put in - as they don't stay contained at the office.

The latter is a thing I seem to be continuously discussing. I would, on a very personal level, like to see that my work has an impact. And I monitor that impact. I'm aware that often more hours put into trying to create an impact can give me more of an impact.

As I become aware that I just spent a week at a conference (long-term gain, I hope), I become aware that this investment takes away from something else. I could have driven forward something else. And that something might be short-term and long-term gain combined.

There's no easy formula for me to say when I would be happy with the results I get for the hours I put in. I know my hours alone don't do much unless they are properly synced with the hours of others.

I would like to think I know the difference between doing and results. Not all doing ends up becoming valuable. Some things don't produce. Appearance of busyness isn't a goal. Being busy isn't a goal. Getting awesome things done is the goal. Yet I find that with a little more time invested, it is possible to get more awesome things done. My work is thinking, and I'm thinking without fixed hours.

Are there a few more awesome things I could get done before the year ends?

Wednesday, December 7, 2016

Tester as a Catalyst

Back in my school days, I was a chemistry geek. And if life had not shown me another path through being allergic to too many things, I would probably be a chemical engineer these days.

It seems very befitting that my past team helped me see that a big value I provide for the team could be best described in terms of chemistry - of me being a catalyst in my teams.

Catalysts don't make things happen that wouldn't happen otherwise. Catalysts speed up the reaction. Catalysts remain when the reaction is done. The potential for a reaction exists, but sometimes you need a catalyst to get to a speed where changes become visible.

I love to look at my contribution from that perspective. The things that happen with a tester like me around are things that could happen otherwise, but I speed up the reaction with information that is timely, targeted, and delivered in a way that is consumable. For many of these things, without a tester like me, the reaction doesn't start or is so slow it appears not to happen. A catalyst can be of any color, including completely invisible. I would like to think of myself as a more colorful addition.

I'm working through my techniques of being a good catalyst in my teams. Here's some that I start with.

Introduce ideas and see if they stick

Through reading and conferences, I have had the chance to be introduced to many people's ideas of how the world of software could be better. The ideas build up a model in my head, intertwined with my personal experiences of delightful work conditions with great results that I want to generate again and again. The model is a vision into which I include my perceptions of what could make us better.

Out of all the things I know of, I choose something to drive forward. And I accept that change is slow. I speak to different people and through different people. I work on persistence. And my secret sauce for finding the patience I lack is externalizing my emotions: counting how many times it takes, writing notes to future me about my "predictions" to hide out of sight, and venting to people who are outside the direct influence of the change.

I believe that while I can force people into doing things, them volunteering is a bigger win, leaving us working on the change together. I can do this, as I'm "just a tester" - as much as anyone is just anything.

Positive reinforcement

People are used to getting very little feedback, and I often find myself in a position of seeing a lot of things I could give feedback on. And I do: we talk about bugs, risks and mitigation strategies all the time. But another thing I recognize myself doing is positive reinforcement.

I found words for what I do from Jurgen Appelo. He talks about managers shaping organizations with the best behaviors leaders are willing to amplify. I try to remember to note exceptional, concrete contributions. I try to balance towards the good.

And recently, I've worked more and more on visualizing and openly sharing gratitude with kudo cards. Not just telling the developer that their piece of code positively surprised me in some specific aspect, but also sharing the same with their manager, who has less visibility into the day-to-day work than any of us colleagues in teams.

If I want to see more of something, instead of talking about how it is missing in one person, I prefer talking about how it is strong in another.

Volunteering while vulnerable

I often find myself in the middle of work I have no clue how to do. I used to hate being dependent on others, until I realized there are things that wouldn't happen without a "conscience" present.

In mobbing, I could see how the developers look at me and clean up the code, or just test one more thing before calling it done.

It's not what I do. It's what gets done when I'm around. And in particular, it's not about what I can do now, it's about what I can do with people now and what I can grow into.

Amplifying people's voices

A lot of people, not just testers, feel they are not heard. Just like my messages get through best with other people, I can return the favor and amplify their voices.

I can talk to the managers about how it feels for a developer with a great idea to be dismissed until I repeat the message. Through these discussions about feelings, I've had managers pay special attention to the people with less power that I seem to find in their communication.

We're stronger together, I find. We have great ideas together. It's not a competition. It's a collaboration to find the diverse experiments that give chances to perspectives that make sense only in retrospect. Emergence is the key. Not everything can be rationalized beforehand.

How to write 180 blog posts in a year

Yesterday, on December 6th - the Finnish Independence Day - I learned that I have apparently written 180 blog posts in 2016. The detail alone surprised me, but the context of how I learned this trivia was even more surprising. It was one of the mentions of my many achievements as I was awarded the MIATPP award - Most Influential Agile Testing Professional Person - at Agile Testing Days 2016 in Potsdam, Germany.

I'm still a little bit in a state of disbelief (influential, me?) and feeling very appreciated with the mentions, likes and retweets on Twitter. But most of all, I get stuck on the idea of 180 blog posts - how did that happen?

With the number coming out, I wanted to share the secret to regular blogging with all of you. I learned it from people who talked to me about my blogging and in particular from Llewellyn Falco.

The secret to regular blogging is not planning for it. Not monitoring how much you're blogging. Putting all your available energy around blogging into the action instead of the status.

There's a time for doing and a time for reflection. I apply cadences to reflection. There's the daily cadence of showing up to try things. There's the weekly cadence of seeing progress in small scale. There's the monthly and quarterly cadence of trying to see flow of value I generate. And there's the annual and lifelong cadences of appreciating things I've learned and people I've learned with.

I write for myself. It still surprises me that people read what I write. I'm still "just a tester", believing no one is really just anything. I live and breathe learning, and blogging is just one of the many tools to learn. If I write (or give face to face) awful advice, I can learn from it. If I don't give the advice, if I don't share my experience, I'm one more step away from being able to learn what could really work.

Share to learn. Care to learn. And wonderful things will emerge.

Wednesday, November 30, 2016

We love our little boxes

There are days when I feel I shouldn't tweet, because my *intent* is just to make a note of something and someone else takes it as "Interesting, I want to understand this" - and a long discussion emerges. Twitter really isn't a good place for having discussions and personally I at least seem to be confusing people more than clarifying over that medium.

So this is a place for a blog post. Here's what I said:
Let's first talk about the principle. For years, I've been working as a tester and I view my job to include working through information and uncertainty. I'm somehow involved in an activity that makes a discovery of something missing or off, and then we'll do something about it when we know of it. Making lists of things we are missing is core to what I've been doing.

However, reading through lists is a huge time-waster. There's a lean principle of avoiding inventory, and lists of things we should/could do are definitely inventory. Creating and maintaining the lists is a lot of work. And a lot of times, focusing on a shorter list in a group that does development is better. Let's deliver this value first and look at what the world looks like after that, right? So I've chosen to try to work on the value we're delivering now.
 
My usual example of minor changes and tracking is my inability to unsee or dismiss typos in applications. I can go by them, but they leave me backtracking and drain my energy. But there are a lot of these things where, if I look from just *my* perspective as a developer, I could do the "minor" task also 5 minutes before the release. But the trouble is, there may be others who will backtrack all the way until the change is done.  

I see two patterns in how these discussions about minor changes go (ones that are really, honestly minor).

Case 1: Yes, this. 

There's a tool with a local testing script for making sure our schema follows the rules. I change the schema, and I get told to run the local script. Except that the script won't run. Going through the code, I learn there's a parameter I need to use, with no clue what its value would be. And when I ask about it, I learn that in addition to the missing parameter, I'm also missing some libraries.

Instead of passing this information to me (and the 50 others who will run into this), the fellow I go to programs relevant error messages into the tooling, so that anyone coming after me gets the info from trying to run the tool. And all of this happens while I sit with him, with the changes in version control by the time I walk away.

A day later, I see someone else struggling with the tool. They appreciated the error message right there and then. No backlog. Small thing, do it now. The same amount of time would have gone into just creating the backlog item.
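The fix in this story - building the missing knowledge into the tool itself - can be sketched roughly like this. The script name, parameter, and library list are hypothetical stand-ins; the point is failing fast with actionable messages instead of a bare stack trace.

```python
import importlib.util

# Hypothetical stand-in for the libraries the real script needed;
# "json" is used here only so the sketch runs anywhere.
REQUIRED_LIBS = ["json"]

def preflight(argv):
    """Collect actionable error messages before the tool does any real work."""
    errors = []
    if len(argv) < 2:
        errors.append(
            "usage: check_schema.py <schema-file> - the schema file argument is required"
        )
    for lib in REQUIRED_LIBS:
        # find_spec returns None when the library is not installed.
        if importlib.util.find_spec(lib) is None:
            errors.append(f"missing library '{lib}': install it with 'pip install {lib}'")
    return errors
```

Anyone coming after me would get these messages from simply trying to run the tool, instead of having to find the one person who knows.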

Case 2: No, not this. 

There's the schema, and it's shared between 10 teams. It's actually only becoming a contract between the teams, so it's work in progress. Reviewing it in a group with representatives, we agree on things that need doing. Like splitting according to our rules. Like removing values that are not in use ("it says DEFisEnabled" and there is no DEF at all yet, maybe in 6 months). Like making sure the names don't confuse us ("it says ABCisEnabled" and "true" means it's disabled).
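Checks like the first of these review rules could even be automated as a tiny schema lint. This is a speculative sketch using the post's own examples; the function and rule are made up, and the confusing-name case ("true" means disabled) would still need a human reviewer.

```python
def lint_schema(schema, existing_features):
    """Flag feature toggles whose feature doesn't exist yet (like DEFisEnabled with no DEF)."""
    findings = []
    for key in schema:
        if key.endswith("isEnabled"):
            feature = key[: -len("isEnabled")]
            if feature not in existing_features:
                findings.append(
                    f"{key}: refers to feature '{feature}' that does not exist (yet)"
                )
    return findings
```

Run against a review's agreed rules, a lint like this would keep nagging until someone actually volunteers to make the change.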

So we agree they need to be changed, and they keep not changing. Because no one volunteers to change them. No one volunteers because anyone could volunteer (including me). And only one of us (me) will be directly suffering the consequences on a continuous basis, with the mental load of remembering what works, what doesn't, and what still needs doing. While we're avoiding doing things that are part of the value we should be taking forward together, the side effects hit other people than the ones avoiding the work.

So...

We all have our ideas of what else we could be doing, and a lot of times that does not come from the perspective of value, but choices of the type of work I personally would like to do. If I'm a C++ developer and we need a Python tool, that is surely a task for someone else? If I'm a senior developer and the change can be done by a junior, that is surely a task for someone else? If I can find someone, anyone, to do it, it isn't my task to do. Because when it isn't mine, I can do more of things I personally would choose to do. Like complex algorithms. Or only testing. Or studying more of something I'm into because I don't have to do *that*. You name it. 

I could frame my original question as: why do so many people volunteer to work only on things they see as "their job", without caring about the overall flow in the system? And all this is a very human thing to do. We like our boxes. We like staying in our boxes. And we love the idea of someone else doing the stuff that doesn't belong in the boxes we like.

Saturday, November 26, 2016

In search of value and bottlenecks

Back in the days when agile was new to me, a lot of teams played all sorts of games to learn about the dynamics relevant to software development. Some of the best training courses included a simulation of some sort - maybe they still do.

I played the Marshmallow Challenge with Antti Kirjavainen at Tampere Goes Agile, and thinking of it, I realize it has been a long time since I've been playing any of these agile games.

One game that I remember particularly fondly is the Bottleneck game. I'm thinking of that game today, as I'm thinking about bottlenecks and people's attitudes.

A lot of times when talking with programmers, I sense an idea of self-worth in writing code. It's true, there isn't a program we can run (and test) without the act of writing the code. However, looking at professional programmers in action by sitting next to them, you see that it's clearly not all about writing the code.

In fact, writing the code is hardly the bottleneck. Thinking smart to be able to write the right code is. And for a non-programming programmer like me, thinking together, in right kind of batches, tends to be the contribution I seek to improve.

Without writing the code, the thinking isn't complete. It's still a theory. The trust we experience in testing the system is in the part of the thinking that ended up in the code.

I also find it absolutely fascinating to look at programmers thinking about a line of code together. Having that chance (by inviting myself into that chance) I've learned that for each line we write, we have a lot of options. I remember looking at one of those cases unfold in particular, in a mob format.

We were replacing a component with another. We were not particularly good at talking in intent; all we knew was that we needed to take out calls to one component and replace them with calls to another. And very early on, someone suggests a way to do a thing. Someone else suggests another option. And another. And very soon there's a suggestion of something no one would have suggested without hearing the other suggestions. And that feeds into yet another suggestion.

While a lot of times we would have gone about doing them all, we did not, because we did not actually propose those as ways we would do it. We listed them as possible ways of doing it. From doing the one selected, we still built more on top of the experience.

In a period of just a few minutes so many things happened. And all these things were about thinking, and bringing in various perspectives and experiences into that thinking. It made me realize that each line of code can have a lot of options on what it includes. From a perspective of a tester, those options have implications. Being a non-programming programmer further away from code, I would not see or understand those implications quite the same way.

The amount of tradeoffs in selections in building software is fascinating. The idea of getting it to work is so different from getting it to work just right for all the criteria we can be thinking of: the rightness for this particular technology, the rightness for future maintenance and extendability, the rightness for performance and security considerations.

So all of this leads me to think about the problems I'm experiencing in (some) developer thinking. The problems of "just tell me the requirements", "just specify what it needs to do". The problems of not volunteering to finish the details when the big lines have already been implemented. The problems of not caring about side effects when there's at least some way it already works. The problems of not considering future self and others in maintaining the code. The unwillingness to cross necessary technological borders and rather waiting for someone else to do their bit and solving problems in the interface together, real time. The inability to spend time on testing with an open mind to learn about things that didn't quite work as intended, that we did not even know to expect.

A lot of times the other thinkers, like product owners and testers are around to compensate for the programmer interests. But I still appreciate the programmers who can do all of these types of things the most.

And I find that they usually don't exist, except momentarily. And in programmer groups, when they are on their best behavior.

Not a consultant

I don't have extensive experience of how consultants do their jobs; I've been one for only a very short period of my career. The consultants I see seem to be self-certain people, some very aware of their specialties and limitations, but all taking significant steps to go about helping organizations transform in some way. With the big visible cost number per day, there's an expectation of impact. Consultants being realistic, the impact isn't usually an overnight transformation, but a slow movement of changing perspectives and habits, and including skills that stretch the status quo.

I've done short gigs with companies. In the most recent one, I first helped assess the skills and potential of tester candidates, delivering a video of pair testing and a report on what to pay attention to in that video, and then later trained the hire in testing the product, again through strong-style pair testing with me never touching the keyboard. My "consulting" gigs are usually very contained, less open-ended than what I see other consultants doing. And all this comes from the fact that I work as "just a tester" in some organization. Now F-Secure, and other product/customer organizations before that. I love the company ownership of the solution and the business development aspects, and I've always felt I get into it all the way, immersing myself in the organization and its purpose.

Today I was listening to Sal Freudenberg's Lascot talk about neurodiversity / inclusive collaboration, and I was thinking about how much nicer it was to think of aspects of being neurologically different in our needs of how we feel comfortable working than to talk of being introvert/extrovert/ambivert. This thinking also helped me outline my favored style of working in organizations, which I've been framing in contrast to the styles of some people who try to achieve similar results.

I tend to avoid pushing people directly to do stuff. I'm very uncomfortable telling my team to do a retrospective (I mentioned it 20+ times, each time marking one in my bookkeeping to see how many mentions it would take to get it done - 23 was the tally). I'd like to tell my automation tester colleagues to go pair with other automation testers, because they just write the same code and it makes no sense to me to have personal repositories, but instead of telling, I again mention. Over time, I will show specific examples of problems, hoping people will take up solving them.

The same goes with me wanting to try out mob programming. I go and pair with people who work on exploratory testing in areas I'm learning deeply about right now. I organize practice mobs on areas I work on, but I don't push people to mob, I wait for them to volunteer. I keep the theme out in the open. I point out how the distributed way of working takes us weeks to complete some tasks that could be done in a day together. I share my excitement about doing learning like this with others. But I don't easily go and just book a time when we will do that. I give people time, assessing applicability of my perspectives and slowly moving in the themes of value. And I have an overall goal: I want us to feel happy and included. I want us to feel useful and valuable. I want us to make a difference together through the software we're creating. I want us to be able to say that we get better at what we're doing, every day.

I experiment with everything. The way I mention things and the ways they get caught. The ways I can personally complete a task. With every action, there's a response. And I spend a fair amount of my time finding patterns in those responses. I love the introspection of my own responses. And people around me occasionally hate that I overanalyze their reactions.

I realize I'm a consultant in a way, even if I am an internal consultant and an employee. I'm around to serve my chosen purpose, share my love of testing and great products, and it's ok that the changes I participate in take years and years to accomplish.

Looking back, I'm super proud of what we became at my previous place of work. From monthly releases and loads of bugs in production, we went to daily releases and rare occasions of bugs in production. From individuals avoiding talking to others, we went to a well-working remote team that got together twice a week. From me feeling alone with my interests in testing, the change to the team caring for testing and me caring for technical solutions was immense. From people feeling they had no power, we went to a team of strong experts who cared for all activities from value to delivering it. We broke the idea of working on an assembly line, each with our own tasks, and contributed according to both our strengths and our stretches. We opened up every corner of single code ownership and cleaned it up to be the team's. We dropped all work estimates, and worked on a limited number of things at a time, delivering things in pairs (or mobs) as quickly as we could. We worked out ways to include work on technical debt, which we understood came from the fact that we were learning: the things we coded a year ago needed an update, because every one of us was a better version of ourselves now.

When I look at consultants, I wonder if their different style of communicating and more direct driving of change is something I should practice. And I recognize my discomfort. It's not that I don't speak out. It's just that my preferred ways of communicating are slow. I like to take my time to see if my proposals will actually improve things. And I see that I regularly dismantle implementations of my great ideas at a time when others have not yet noticed that they are not really working.

Not a consultant, yet a consultant. Aren't we all? 

Tuesday, November 22, 2016

New World Problems in Automation

I did a talk yesterday, which I think of as being around the idea that in a world where we've found useful and valuable ways of including automation in testing in a relevant scope, what more is there to do on that theme. Surely we're not ready, and the future (and today) holds many challenges around spreading skills and knowledge and innovating around today's problems.

I keep thinking back to an after-conference discussion with someone who, I think, first gave me the idea of where we can be if we stop fighting against test automation's existence. They had loads and loads of automated unit, component and integration tests. They found their tests valuable and useful, while not perfect. But I'm most intrigued by the reminder of how the problems we talk about change when we have automation that isn't just wasting our time and effort.

The problem we talked about was that it takes too long to run the tests. What to do?

I want to first take a moment to appreciate how different a problem this is from not knowing how to create useful test automation in the first place. Surely, it sounded like this organization had many things going for them: a product that is an API for other developers to use; smart and caring developers and testers working together; lots of data to dig into with questions about which tests have been useful.

So we talked about the problem. 30 minutes is a long time to wait. They had already parallelized their test execution. But there were just so many tests.

We talked about experiments to drop some of the tests. Thoughtfully reading through them to remove overlaps takes a lot of effort. Tagging tests into different groups to be able to run subsets. Creating random subsets and dropping them from schedules to see the impact of dropping - like having different tests run on each weekday, so that each test ends up running only once a week.
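The weekday-rotation experiment could be sketched as a stable hash of the test name into one of five buckets: run one bucket per weekday, and every test still runs once a week. A hypothetical sketch, not their implementation:

```python
import datetime
import hashlib

BUCKETS = 5  # one bucket per working day

def bucket_for(test_name: str) -> int:
    """Stable assignment of a test to a bucket: same name, same bucket, every run."""
    digest = hashlib.sha256(test_name.encode("utf-8")).hexdigest()
    return int(digest, 16) % BUCKETS

def runs_today(test_name: str, today: datetime.date) -> bool:
    """True if this test's bucket matches today's weekday (Mon=0 .. Fri=4;
    the modulo makes weekend runs reuse the Monday/Tuesday buckets)."""
    return bucket_for(test_name) == today.weekday() % BUCKETS
```

With this, any single day's run is roughly a fifth of the suite, at the cost of slower feedback on the tests not in today's bucket - which is exactly the tradeoff the discussion was about.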

We talked about how we don't really know what will fail in the future. How end-user core scenarios might be a good thing to keep in mind, but how those might stay in the minds of the developers changing code without being in the automation. And how there just does not seem to be one right answer.

I have some work to do to get to these new-world problems. And I'm so happy to see that some amazing, smart people who also understand the value of exploration in the overall palette are there already. Maybe the next real step for these people is machine learning on the changes. I look forward to seeing people take attempts in that direction.

Sunday, November 20, 2016

Remind me again, why do I speak at conferences?

It was 1997, and all I wanted was to be one of the cool kids on the student union board of executives. I announced my interest and was standing in the queue to introduce myself. At the time of my introduction, in front of a large classroom with probably 50 people at most, I was shaking, about to faint. They could see me shake. My voice would escape me. The introduction did not go well, but it changed my life. It changed me by getting me to start working on my handicap: fear of crowds and public speaking.

With 59 public talks in the 2015-2017 timeframe, I can say that the change is quite significant.

Public speaking is not an inherent talent; it is a skill we can practice. It's a skill I have practiced, and an area where one is never ready. All it takes is a decision: I will do it. Opportunities to practice are everywhere. Start safe with something you know and care for, with a short talk, with an audience that shares your interests. They want to see you succeed, and they are interested in your framing of the same topics. You don't have to travel across the world for your first talk. Talk at your own company. Talk in your local community. Don't worry about failing, we all do badly sometimes.

I've learned not to do theoretical talks as they don't sit well with me - I speak from experiences and cases, even if those cases are full of imperfection. I've learned to take down the amount of text on my slides as my comfort levels of speaking went up. And I've moved from talks to live demos. There is no one recipe for a talk. I find the recipe to seek is bringing yourself to the talk and for me, it's been a long journey of experiments with all sorts of topics, approaches and emphasis.

I recently listened to Agile Uprising Podcast's Women in Agile episode, and listening to it made me upset. One of the panelists introduced the idea that it's bad for all of us women if a woman who shouldn't be speaking (not good enough) gets a chance. It made me think back to the chances I was given to practice. I wasn't always good. I got better through practice. Speaking isn't a one-off thing; it is a journey. Amongst all the average men, why is one average woman framed as bad?

Getting ready to yet another talk, I ask what I've recently asked so many times: why do I bother? Why do I speak at conferences? What's in it for me? As my friend reminds me, there's three things conference speaking gives me.

  1. Network and connections. My network isn't of immediate financial value to me as I'm not a consultant, but I've found (and keep looking for) special people to connect with. People who can inspire me, teach me and keep me honest when I'm learning. 
  2. Strive for learning. Speaking at conferences gives me chances to participate in conferences I couldn't be in otherwise. Many of my colleagues talk about years without being in one, and I go to tens of conferences every year. My unique position as someone who goes out a lot, in combination with my personality, makes me the person who drives my organizations in better directions. I know what is possible outside my company, and I don't believe in "impossible". I'm never completely happy with where we are now, but always seeking options to do things better at work. 
  3. Setting an example. I speak so that people like me would dare to speak. People who are not consultants but practitioners. People with severe stage fright. People who don't see people like them on stage, just like I still don't see people like me. We still have an unrepresentative number of women from the field of testing on stages. 
I try to remember this when I'm in a different country while my son is sick at home and I can only offer my voice as comfort. And I reflect on this as I work to pass the ball forward, to take time to stay away from conferences to work on other projects. 

If you are someone working your way into the speaking circuit, either right at the beginning or anywhere on the journey, and believe my experiences could be of help, please reach out. If I can be a speaker, anyone can. 


Friday, November 18, 2016

Thinking you're the best

I've been to organizations where we talk about "being the best". It kind of strikes a chord with me, in particular because I believe I'm a really, really good testing specialist - and a decent software generalist too. But today, I'm thinking of the risks of thinking of yourself as the best.

If you think you're the best, you think there is nothing to learn from others, and in the fast-paced industry of software development, that attitude would be irresponsible and dangerous. If you are the best, you often view yourself as an individual contributor who may also fear being revealed not to be as perfect as she'd like to be when in collaboration settings.

For years, I prepared the previous night for every relevant meeting. I went in with a ready-made plan, usually three, to prep my responses for whatever might emerge in the meetings. Back in school, my Swedish teacher made me translate things out loud every class, because of my "word-perfect translations". The truth is, I had them pre-translated with great effort, because I was mortified by the idea of having to do that work on the fly.

Through my own experiences, I've grown to learn that the pre-prep was always my safety blanket. I did not want to look bad. I did not want to be revealed. I was the person who would rather use 3 days on a half-an-hour task. And I would say it was for my "learning". It was for my "personality". But the truth is, it was for my fear of not being perfect.

This might explain why I'm nowadays such a strong proponent of collaboration: mobbing (safer) and pairing (personal stretch). When two non-perfect people work together, the result is magic.

Pull systems

If you've been around agile a while, pull systems are probably a thing you've heard of. They are not directly related to pull requests, which are requests to review code. With pull systems, the idea is that in a process with many steps, the step after yours determines when you should be producing. This avoids inventory (stuff lying around without moving forward), which is a form of waste.
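The idea can be sketched in a few lines of code: a bounded buffer between two steps, where the upstream step blocks until the downstream step pulls. This is just an illustrative sketch of the concept, not anything from a real pipeline; the function and item names are made up.

```python
import queue
import threading

def run_pull_system(items):
    """Move items from a producer to a consumer through a pull system:
    a buffer of capacity 1, so nothing piles up as inventory."""
    buffer = queue.Queue(maxsize=1)  # capacity 1: strict pull, no inventory
    consumed = []

    def producer():
        for item in items:
            buffer.put(item)   # blocks until the downstream step has pulled
        buffer.put(None)       # sentinel: nothing more to produce

    def consumer():
        while True:
            item = buffer.get()  # the pull: downstream sets the pace
            if item is None:
                break
            consumed.append(item)

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return consumed
```

The producer here physically cannot run ahead: `put` blocks until the consumer has made room, which is the whole point of pull over push.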

If I think of my testing as a pull system, the consumer of my results is the developer making changes or the business person making decisions on accepting risks knowingly. It makes little sense for me to produce information that no one wants. There needs to be a consumer for that information.

However, the stuff we work on does not come with clear-cut rules about what people are interested in. No one can tell me exactly what these different stakeholders already know, what information they find useful, or even whether their finding something useful is correct for their stage of learning about what is relevant. So I shoot some info at them and stop to look at reactions. I try the same or a similar thing several times and stop to look at reactions, including whether they even notice I'm trying to establish a pattern. I have a heuristic: when they still reject the info at a point where they see the pattern without me pinpointing it, we may be approaching a point where my time is best used delivering a different message. Except that time changes things, so it makes sense to try again later - it was clearly relevant enough in my perspective to work on getting the message across in the first place.

There's again a trigger for me thinking this through. The trigger is realizing that pull systems thinking had become very ingrained in me and, as it turns out, in some of my closest colleagues. Taken far enough, it means that no information is offered without you pulling it. Broadcasting things in hopes of serendipity can be considered waste.

One of the biggest puzzles for me in the first two months of the new job has been that it took forever before anyone offered me info I did not have to go hunt for. As a new employee, I felt lucky I was not here for the first time, so I knew things from the past and had the means to go for the information. But at the end of every day, I've felt exhausted. Every day of work has required all my senses to be fully aware, and nothing has come easy.

I never sensed that this was ill-intentioned, as whenever I had a question, the response was overwhelmingly positive. No one refuses to help me, not in words, not in expressions. They are there whenever I know what I need from them.

And finally yesterday someone voiced the words "pull system" when discussing my experience in our first ever (during my time here now) retrospective. We've been trained to think pull rather than push when we're trained in agile thinking. When no one voices the idea of pull systems to a new hire, the new hire can end up feeling overwhelmed and exhausted without understanding why. In my thinking of plausible explanations for the behaviors, values/culture were high up on the list, but my guesses were targeted more towards an individualistic culture ("I'd rather work by myself") than the idea of encouraging pull systems.

And there is such a thing as taking pull systems too far. When you're in a discovery process, you don't know what to pull unless you first discover more. Without broadcasting some information, you will minimize serendipity: the lucky accident of something relevant finding you that you did not know you could know.

The world is a balance. I'm still a big believer in pull, but some of the stuff we just need to push. Finding the right balance is fascinating, and I will think about this a lot more with regards to the ways I test. I still believe in teaching my stakeholders to pull information from me, remembering that I'm not the one fixing bugs in the middle of the night if they escape into production. My expertise is valuable and, through pull system thinking, deemed more valuable by the consumers of the information I provide. But there might be stuff I need to just push, broadcasting more. 

Thursday, November 17, 2016

Don't set me up for a failure, it will happen organically too

I appreciate pull requests. The idea that one of us changes something and it goes through other people's eyes is wonderful. I like the idea of not being left alone. I like the idea of getting feedback and suggesting improvements. And I see this works well in some cases, usually ones where the pull request is small, its scope is clear and there are knowledgeable people to comment on the stuff.

However, recently I've also had the pleasure of following pull requests that end up in long discussions. I've had my own 1st pull request in a new company rejected without a good reason. And I've seen that a process that serves some well is really painful for others.

So on some of the activities that end up as pull requests, I keep repeating: we would be better off mob programming. Pull requests can turn into:
  • wasted time on implementing a change that isn't welcome
  • wasted time on arguing over authority: who gets to decide what is good enough or right?
  • back-and-forth discussion over a long period of time, trashing all focus on other work
There was a twitter comment behind my need to write this post, with regards to my note on "I'd rather be mob programming":
I find that people learn in different ways, and I don't see anything inherently off with learning in a mob as long as I feel safe. With a smaller number of voices, I may learn bad practices that I need to unlearn later. I may be left alone with conflicting information from two "seniors". And even worse, I may get completely blocked in my attempts to contribute. You wouldn't believe the reasons people find when they go looking for why nothing I do is ever good enough. Except that it is.

In a mob, I'm not a victim of one person's perspective. If I use the wrong words to express things, I find that other people help out and get to the core of the idea. As long as there is kindness, consideration and respect. We need psychological safety, and establishing that over PRs if it does not yet exist or gets forgotten is almost a mission impossible.

When I'm new, you don't need to give me tasks to try me out alone. I don't have to sink first to learn to swim. And even worse, as someone new, having me work on something only to correct me through pull request discussion (or rejection) isn't really setting me up for feeling like I belong.

In specific, this tweet makes me want to express my feelings more clearly:
Trust me, you don't need to set me up for failure to expose weaknesses or open educational opportunities. Those emerge organically when working on things. And while getting to those educational opportunities, the idea that I really dislike is that there is no better way than just reading the code alone. Because there is a better way. It could be a pair or a mob. If you are a woman working with guys who keep saying that women only write comments in code (old stuff, sure), you will always feel safer with a group. In a group, people are on better behavior. Even if it gets called a Mob.

Wednesday, November 16, 2016

A Pull Request Ping Pong

There's a schema that needs doing. Nothing special. Just names of things. Allowed values. Defaults. And from my past experiences, doing this badly early on is a source of many bad things when multiple systems are supposed to communicate (and won't) or when someone realizes that it should be called something different. I tend to come with this feedback from the testing angle, so better get on it early on to avoid ripple effects.

We meet and agree this work needs doing and that we will do it as we do everything, with pull requests. Just change it for the better, each of us individually. Nothing happens, so I bring us together again to agree on principles of naming, organizing and formatting, and we walk out with some rules I can act on, and an agreement that I wouldn't, but that everyone else ("programmers") would. We also agree on a timeframe.

The timeframe passes and half of the people have made changes. No one has really made the changes we agreed to be applicable throughout the schema. So at the deadline, I both remind people (and stuff happens) and volunteer to do some of the consistency and documentation related work on it.

I get sucked into the fascinating process of pull requests. A lot of them, on a short timeframe. And a lot of them having a lot of discussion. I experience the wait time. The rework. The ping-pong. The silently undone work that no one volunteers to do as it might be rejected as there isn't a principle on those yet.

Like naming fields that hold a numeric time value so the name includes whether it is milliseconds, seconds, minutes or hours. Like knowing exactly what each value does, to ensure the names communicate the right things through several systems. Like checking default values for consistency when they are spread around in a few files that no one immediately claims as their own. There's a lot of hoping it's "someone else's work".
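Consistency rules like these are exactly the kind of thing a small script could check instead of a pull request argument. Here's a minimal sketch of one such rule: duration-valued fields should carry their unit in the name. The field names, type labels, and unit suffixes are all made up for illustration, not taken from the actual schema.

```python
# Hypothetical lint rule: a "duration" field's name must state its unit.
UNIT_SUFFIXES = ("Millis", "Seconds", "Minutes", "Hours")

def check_duration_names(fields):
    """Return the names of duration fields missing a unit suffix.

    `fields` maps field name -> type label, e.g. {"pollInterval": "duration"}.
    """
    problems = []
    for name, kind in fields.items():
        if kind == "duration" and not name.endswith(UNIT_SUFFIXES):
            problems.append(name)
    return problems

# Example schema fragment (invented names):
schema_fields = {
    "retryTimeoutMillis": "duration",  # fine: unit is in the name
    "pollInterval": "duration",        # flagged: milliseconds? seconds?
    "hostName": "string",              # not a duration, ignored
}
```

A check like this could run on every change, turning a recurring review comment into an automated, non-negotiable rule.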

With the time we used on the ping-pong, we could have met together and done a lot of this work. So I wonder where the idea of the excellence of pull requests for this comes from. All I can think of is the idea of being able to work alone. Pull requests are an appearance of collaboration when real collaboration is available: pairing or mobbing.

I'll think of this the next time someone tells me mobbing is ineffective. There's still so much harmonization work that no one volunteered for today, in the hope that someone else will deal with it. Or that time will take care of it. When there's more than this one place of dependency, it's clear it will no longer change. It's now, or it's hard. And I prefer getting things done.

A great reminder from a friend:



Saturday, November 12, 2016

Triggers to search for a new mindmapping tool

I've used mind mapping tools for years, and recently my go-to choice has been Mindmup. I've used it in most of my public exploratory testing presentations, and usually at least a few people come to check the name of the tool again after the talk. I've been surprised they did not know about the usefulness of electronic mind mapping in exploratory testing.

As always, this Thursday I clicked my way to mindmup.com and opened a map for my mob to use as a note-taking tool while demoing mob testing.

The tool had been updated to a visually new version, which mildly annoyed me, but I put that down to just not liking it when tools I like change. There were no problems during the session, the group created a map and all was fine.

After the session, I proceeded to do what I always do: save the map in my gdrive collection of maps different sessions have created. And I couldn't do it - all I was getting was an upsetting prompt to buy Mindmup Gold.


That sparked a tweet of frustration, which the Mindmup twitter account quickly picked up even without hashtags or mentions.

Later, I was forced to pair with a friend to "solve my problem with saving the mind map". I wasn't particularly happy. I just wanted to change tools. In the world of options, I felt it was time to move on. If nothing else, the pairing revealed two things:

  • Where I went wrong with ending up with a map I couldn't save as I wanted to
  • How to work around not being able to save as I wanted to
Where I went wrong?

Nowadays, I have several clicks more to go through to get to a new map than in the old version. There's a landing page first. Then there's a page with buttons to start a map. And finally I'm in a map. 

Unsurprisingly, I wasn't particularly keen on looking at the new screens or paying attention. So when I thought the second screen had two buttons, it actually had five. 

I never realized that I should have chosen a storage option before creating a map, as that was conceptually different not only from previous versions of this product, but also from most of the other products I use. I create and I save - in that order. I don't plan on saving before creating.

The workaround

Instead of being able to save to gdrive, I could now export a map. I probably could have saved the exported map on my gdrive, yet I ended up importing it into a map using gdrive as the storage option. 

So? What now?

I'm still upset after two days, even if I'm calmer than before. I very much dislike the extra clicks, and I'm ready to pay for my mapping tool. I'm looking into options. I was recommended Coggle and Mindmeister, and I'm now playing with those to see if I like them. I'm probably also going to check Xmind, to see if I would go back to it.

Or, maybe I'll feel differently soon. 

The experience, however, is showing me how I deal with software frustration. One annoys me and I'm paying for the others. That's kind of funny.  
 

Your choice, explorer!

I had five amazing, insightful volunteers for my demo exploratory testing mob at Testbash Philadelphia. Their open reactions made the experience even more worthwhile for me. In particular, I remember when one exclaimed "Interesting! Oo, I don't like that.", when another said "I wouldn't write anything down before I know more about the application", and a third confessed that the experience was both fun and exhausting, but that the chosen style of exploring wouldn't be theirs. The fourth mentioned being unsure of the rules and how strictly those should be followed, and the fifth was missing a specification to begin with. The volunteers' individual approaches got many participants to approach me later, wondering about the options. What would be the right way to go about exploring the application?

I find that one of the powers of testing in a mob is how it reveals our personal differences and preferences. My volunteers were all from different organizations and locations, with different built-in personal styles. Being different is a strength, helping bring in different perspectives. But to work in a mob, on the longer term in particular, they would need to build common mechanisms.

They could experiment with different approaches. Let's agree not to write *anything* down for an hour. Let's agree to write only bugs down. Let's agree to only focus on listing functionalities. Let's not touch the application but go look for online documentation of what to expect. All valid approaches.

Working together would create a mix of recipes the group uses to bring in things each feels is relevant, giving chances for the ways of others. They could agree to spend an hour in being mainly navigated within the intent of one, perhaps starting from the one least strongly formed. Instead of choosing one, try all styles. Let the unlikely show its power first.

Within the half an hour of the demo, we did not go too deep into letting intent emerge from the group, but I gave them the box to play in. We saw it was hard to stay in the box, and that problems all around the application made it even harder.

So what would be the right way to start? My one rule is variance. Do it different every time if that is possible to maximize your chances of serendipity. What goes first determines what can come second, and the varied order of things helps you think in different dimensions. It's not just one way, but whatever you do first already changes the state of your knowledge.

Your choice, explorer. Just realize the choice exists. 

Friday, November 11, 2016

Ruining the life of the introvert

I listened to an insightful talk by Elizabeth Zagroba on Succeeding as an Introvert yesterday, and throughout the talk I kept reflecting on my own thoughts. I thought about how awful the idea of Mob Testing I introduced first thing in the morning must be for someone who identifies strongly as an introvert. I thought about how I've been regularly labeled an extrovert, yet how I recognized most of the introverted aspects she was describing in her talk as things I do - like thinking critically once more after the meeting is done. I thought about how with those definitions, almost everyone in Finland would be an introvert. And I laughed at my cultural discomfort earlier that morning when the Americans decided to talk to me in the elevator.

I remembered back to my old job, citing one of the developers who wasn't particularly happy with me bringing agile ways of working into the place: "You are here to ruin the life of introvert developers". I hoped I wasn't then, and I still hope so.

Mobbing may seem like a nightmare for the introverted. Sitting in a room with people the whole day, and forcing the task to happen on one computer by speaking up about what should happen on that computer. It sounds like it's hard to disengage, as there's a continuous rotation of each mob member being on the keyboard. I had no time to address this in my talk, so I thought I'd write about it.

When reading an article about how Google builds great teams, I had written down a quote:
Good teams are not teams where introverts are left by themselves but ones where they feel safe and can open up. 
We all, introverts and extroverts alike, want our contributions to matter. A great team is one where everyone pitches in, in ways they find comfortable. Leaving introverts entirely alone wouldn't work, and having the special traits of introverts available is an asset.

A lot of times, introverts struggle to be heard even more than extroverts. In a functioning mob where "kindness, consideration and respect" is in action, introverts might have a better chance of getting their insights incorporated. In a delivery process that welcomes ideas and feedback at any time, the time to reflect to come to an idea is a non-issue. In a mob, anyone can use a second computer while working on the shared task for research activities. A lot of times, this enables an individual to contribute just the right thing at the right time, to keep the overall task flowing. In a mob, anyone can step out at any time, going for a walk to take quiet time, and just rejoin when ready. The work continues meanwhile, and after.  "Ask what you need" was Elizabeth's message yesterday, and that resonated with me.

Mobs are not just for extroverts. I have the utmost respect for Aaron Griffith, who has a test automation background, was an integral part of the original Hunter mob, and is a self-proclaimed introvert. His article on Mob Programming for the Introverted is a great reference from someone who isn't just thinking about introversion, but living the life of one - in a 40-hours-a-week mob.





Sunday, November 6, 2016

Mob Testing at the New Work

I'm absolutely delighted to have tester colleagues again. Well, we call ourselves quality engineers, but the name does not change it - there are some pretty amazing testers out there to share with. The group of testers reminds me of times when I was not dependent on Twitter to find people to learn with, as I was while being the only tester amongst all the developers. And it makes me feel torn in my priorities, at least a little.

Since I joined, we've restarted our regular meetings, which we call QE Forum. And created ourselves a chat to discuss and share. A lot of positive energy around. Just last week we did a lean coffee at work, and I learned a lot from my tester colleagues, both in the topics they're into and in the stuff we discussed deeper than titles.

The collaboration with my new-found colleagues has made it clear we're divided in our interests in general, while I seem to be interested in both camps. Usually it's either test automation or deep exploratory testing. And the divide also shows in my efforts to introduce learning together through mob testing.

We've done two mob testing sessions so far.

  • Mob Exploratory Testing on a functionality I was working on at the company
  • TDD in Python to improve our programming skills in the language we use to automate
The TDD session produced two (to me) surprising reactions. The first was a non-programmer colleague who chooses to join programming sessions when I organize them in mob format. This experience brings back memories of how mobbing changed me: from reluctance to curiosity, and through learning to liking. The second was a programming tester colleague, who is now interested in moving from system level automation to helping developers with their unit tests. 

The Mob Exploratory Testing session was just fun and laughter amongst us, finding problems I had not yet paid attention to in the feature I brought for us to test together. It introduced me to tools no one had told me about before - tools the others thought would be evident, but how could they be for someone who had just joined the company? I introduced approaches to testing the feature that went way beyond the user interface, and we made interesting observations about limitations of the implementation too. 

So getting my tester colleagues to practice mobs seems doable and fun. The learning from group sessions makes us stronger in our individual work. But the big step is still work in progress: getting to do mob programming with my new developers. That may take some time, as I'm not ready to push people too much even if I believe in it being helpful. 

Not the same task

I share experiences and thoughts I've had on mob programming and mob testing from my own starting points and perspectives. I don't (yet) get to work with teams fluent in promiscuous pairing, I still struggle with getting people to pair with me across specialties (tester-developer pairing), and a lot of the problems I'm facing are related to an inability to make releases, or to collaborate efficiently so that a lot of time is not wasted on either waiting or doing the wrong things.

I wanted to address a specific issue in the idea of doing a measured experiment on mob programming vs. pair programming vs. solo programming: the experience that it's not the same task and end result we're doing and getting into. 

The Ladder Effect

I was mob programming with my team, and we all perceived we were working on something relatively simple: we needed to take out old Telerik UI components and replace them with a new family of Telerik UI components. It was a task most of my team's developers had individually done in other areas, and we were mobbing just because we wanted to practice working together - learning together.

One of the developers was navigating and telling what to do, when another stepped in to suggest a better way. As the two had different ideas, we got a third idea and a fourth idea, and had a quick discussion on their benefits. The fourth idea would never have come about without hearing the first three proposals. This is what I think of as the ladder effect: you need the input of others to arrive at the idea one of you eventually has - its owner thinks it's obvious, and it turns out it isn't. 

If each of the developers were to complete the exact same task individually or in a pair (which of course we don't do; we're not researching but working to create value for production), the task is not the same. The resulting code is not the same. 

Not the same may mean it's irrelevant. We always have bugs in production, and some of them are just not that relevant. It may also mean it is relevant: that there's work not done, to be done later either as fixes or as slower progress on the next features due to what was missing from the code. 

To get all four ideas in, we would have needed an implementation, people caring to review it in enough detail, and at least one change of the whole approach with a complete rewrite. So we shouldn't compare solo vs. mob with the idea that the end result in production is the same. 

A Stupid Little Thing

In a solo work setting recently, I was looking at code and realized that the encouraged practice is to include a copyright header in each of the code files. A lot of the files we had recently been modifying had a reference to the year 2012. In many ways, this problem is below cosmetic, as it is never visible in any way to the end users since we don't publish the code. But housekeeping-wise, there's still a working agreement that we'd update this to reflect the current year on edit. 

In individual work, whoever is bothered with these can go and fix them. But hardly anyone is bothered enough to do anything. 

In a mob, when this comes up, the group quickly realizes what a stupid and repetitive task updating these is. Where each individual would either just fix the ones in their scope or dismiss the problem completely, the mob, hearing a second mention, tends to recognize a theme they don't want to hear about again. And on recognizing a repetitive thing that wastes the effort of the whole group, it gets automated. 
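The automation for this particular chore could be as small as a few lines. This is a sketch of the kind of thing a mob might write in minutes; the header format here is an assumption, not the company's actual convention.

```python
import re
from datetime import date

# Matches a header line like "// Copyright (c) 2012 Some Company".
# The exact wording is a made-up example of such a convention.
HEADER_RE = re.compile(r"(Copyright \(c\) )(\d{4})")

def update_copyright(text, year=None):
    """Bump the year in the first copyright header found in `text`."""
    year = year or date.today().year
    return HEADER_RE.sub(lambda m: f"{m.group(1)}{year}", text, count=1)
```

Wired into a pre-commit hook or a build step, a helper like this makes the working agreement self-enforcing, so no one ever has to mention it again.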

A Big Difference

One of the projects I recently completed with my team was a 6-month refactoring/rewrite of something built over four years. Little did any of us know back when the problem started developing... But in hindsight, I can describe the problem. There was a solo developer who would always get features done and into production, though not the fastest of them all. He believed that code should be a tree in which you introduce a new branch when you get a relevant new requirement. By the time we realized what his beliefs were, he had seven full copies of the code, conditioned to different versions from the start of the program. 

Someone did review it, but the reviews were in vain. In face of a seemingly impossible task, people step down and focus on what they were supposed to do. While others in the team are conflict-averse, I have a tendency of taking up things others don't. Thus the rewrite, as a pair. 

If we were mobbing, that result would have never happened. The developer creating the code with a significantly different belief system in what makes code maintainable could have learned sooner. Instead, he was doing code as he had done for the last 25 years. 

Whenever I paired with the developer even for half an hour, I was exhausted for a week. I'm sure others had the same feeling. Working with him in a mob, we could share the load of dealing with the differences. 

If it were up to me, that developer wouldn't be a developer. But it is not up to me. So I do what is the second best choice: help him learn. Mobbing was powerful in building up the team to face the challenge, in supporting us in dealing with it, and eventually in finding the courage to fix the issue we had let build up all too long. 

Starting to pay back is more relevant than cost

All of the stories I share here are really about the task, or the cost of getting it actually completed to the same level. But a lot of the time, I find, the cost does not matter - time matters. If five people mobbing get something into production in a week, versus one person working on it for three weeks, we'd rather reach the point where the investment starts to pay back faster. 

We win customer gigs for speed - things that would never be our business' points of success otherwise. We have time to earn more money with the sooner availability. The little extra cost on development can become irrelevant quickly with scale in ability to sell the software. 

There was a feature we said would take 6 weeks based on our previous experience with similar features. We mobbed on it for an afternoon with 7 people, 4 hours each. A pair of us continued, and the feature was done two working days later. So 6*40 = 240 hours versus 7*4 + 2*2*7.5 = 58 hours. 

I'm sure the first estimate included significant wait times and interruptions that would happen a lot more over a longer period of time. But the realized effort still left plenty of the time for reading books and studying that tends to be padded into our estimates, even though mobbing and pairing took that embedded slack away. Fun is a powerful mechanism. Yet the cost isn't what's relevant. We won a customer case with that feature. That's what is relevant. 

Conclusions

If I wanted to compare individual work to mobbing, I would need ways of making the task the same. Both the individual and the mob would need to produce a result with the same features:
  • Same value for production, including same lack of bugs
  • Same value for maintenance, only to be revealed long-term
  • Same positive impact on the future tasks done individually 
  • Same positive impact for the business overall
My evidence may be anecdotal, but it is enough for me to try more of this out. I welcome someone with the resources for proper research to do a good comparison. Then again, there's still a gaping hole in good research on individual and paired performance too. Setting up repeatable experiments is a lot of work, and meanwhile I'm happy to help my organizations improve using mob programming as one of the tools.