Friday, December 30, 2016

Privilege of location

In the last two years, the world has become smaller for me.

I've been in a relationship involving a lot of Skype discussions and have grown to appreciate how things that would have been financially impossible in the time before free calls over the internet are now something I can rely on. I've fallen asleep with Skype on, and woken up with Skype on - before it turned more unreliable and started disconnecting.

I've actively learned to initiate discussions with other professionals over Skype and learned to pair test and pair program over join.me or Screenhero. I've met every person who submits to the conference I'm organizing over Skype, as if we had met in person. And when solving a problem at work that involves remoteness, I'm now the one who suggests moving from text to voice, because I know from experience that it is possible.

When I travel, I just decide where I will go. My Finnish passport lets me in nearly everywhere, and into most places without any additional paperwork with regard to visa applications.

Recently, I've grown to see what a huge privilege this is. And how, while the world is small for us all with technology, the physical presence aspects are not quite as straightforward.

For European Testing Conference 2016, I invited Jyothi Rangaiah from India to speak in Romania. I was totally unaware of what it would mean to me as a conference organizer. She needed to apply for a visa. For that she needed an invitation from a conference organization registered in that country, tickets purchased and a hotel reserved under her name. So we booked her hotel separately from the others. She bought the tickets. We wrote an invitation letter. With all these papers, she would need to travel to another city to apply for the visa. And it all failed because the conference organization is a registered non-profit in Finland, not in Romania. We naturally covered the costs, lost the work and did not have Jyothi present.

For European Testing Conference 2017, we decided to try again. And I'm looking forward to being able to introduce Jyothi's talk to our community.

As a conference organizer, I've put numerous (extra) hours into being able to have her here. I expect there are still calls or even visits to the embassy before this is all done. I respect her persistence in getting over more obstacles than I ever realized there were.

But this also makes me think. While I believe in inclusiveness, how much better do Indian speakers have to be to be included, given the extra effort they require? How much higher are my expectations going to be for someone who is both financially and effort-wise more costly? Out of the vast numbers of new speakers, why would I choose to invest in someone coming from a "complicated" location?

For me, the answer is simple. I will do it because I believe in change.

But stop telling me this:
The vast majority of people never get to consider speaking at a conference. The fact that we don't know this is called privilege. It is not our problem.

Hat tip to all the people who make it as international conference speakers with less privilege than I have. And as a white (not rich) female, I have a lot of privilege, even if white (rich) males tend to have even more.

Thursday, December 29, 2016

The Special Kinds of Testing that Aren't that Special

My tweet stream today inspired me to write a bit about why words don't matter quite as much as we seem to think they do. It wasn't just one tweet that inspired me, but several. But if I had to name one, this would be it. I wrote myself a little post-it note saying:

Isn't all testing
   ... model-based
   ... risk-based
   ... exploratory?

Having done model-based testing, it distinctly describes (to me) testing that is based on explicit, programmable models.

Having done risk-based testing, it distinctly describes (to me) a way of testing where we identify things that could go wrong and their relevance, and test with a focus on that information (as opposed to requirements-based testing).

Having done exploratory testing, it distinctly describes (to me) an approach to testing where thinking and learning are at the core, over repeating maneuvers.
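
To make the model-based distinction concrete, here is a minimal sketch of what an explicit, programmable model could look like - a hypothetical login flow expressed as a state machine, where the model (rather than a hand-written list of cases) drives which action sequences get tested. Everything in it is illustration, not any specific tool:

# A hypothetical state-machine model of a login flow. In model-based
# testing, test sequences are generated from a model like this rather
# than written out case by case.
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_fail": "logged_out"},
    "logged_in": {"logout": "logged_out"},
}

def walk(model, start, actions):
    """Drive a sequence of actions through the model, failing on illegal moves."""
    state = start
    for action in actions:
        assert action in model[state], f"'{action}' not allowed in '{state}'"
        state = model[state][action]
    return state

# One generated (here: hand-picked) path through the model.
assert walk(MODEL, "logged_out", ["login_fail", "login_ok", "logout"]) == "logged_out"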

We combine existing words to find ways of communicating ideas. Usually we need more than two words put together. The order and number of words depend on the person we're communicating with, not just on the person delivering the message.

In words, more is better. These special kinds of testing are special, and they are not. The labels alone won't help you see what is included.

The Model Competition

There's this thing that keeps happening to me, and it makes me feel like I'm not good enough. Great people design a feature without me. They do the best job discussing it, write something of it down, and later on, I join in. I look at what was written down and talk to the people about the discussions that took place before me. And I just can't make sense of it to the depth I want. So I hide in a corner for a day, do my own research and design, seeking motivations for why anyone would want to use this, and come out with a private, rewritten version of the document that has little resemblance to what was there in the first place.

It has happened so many times that I've gotten over the initial idea of me somehow being better at seeing things others don't. And I see others doing it to me, if they just dare to take a day of reflection time.

The version I start with is not the feature; it's a model of the feature, depicting the parts considered relevant in the original discussion. The process of discussing was relevant, and I missed that. And the process of discussing would have been fueled by its participants reflecting on whatever external information they deemed useful.

What I do is create another model. The value of my model is to give me time to build my understanding, but since I have more to build on than those who came before me, there tend to be aspects in my model that were missing from the earlier one and are deemed relevant.

No model is perfect. The idea of constantly improving and looking for new perspectives - models - to replace or enhance the old is what's needed.

This time, my model won over the older one, and got promoted as the common piece around which we communicate. And I just wish someone like me comes along, looks at it and does not accept it at face value. But that does not happen by just showing up; there's significant work in creating a model of the perspectives we may be missing now.

Wednesday, December 28, 2016

The Stress of Artificial Deadlines

"But we promised", he exclaims. There's a discussion going on about how most people are not proper professionals, and how little they care, and I'm inclined to disagree. The promises are thrown into the discussion to show one cares and others don't. "I've worked hundreds of extra hours because I care about the promises" concludes the discussion. The privilege of being able to find those extra hours is one thing, and the personal stress the extra hours have given just may not be worth it.

For the last four years, I've worked hard to get rid of promises, because making those promises has, in my experience, more side effects than positive impact.

I'm one of the people who, in the face of a deadline, don't perform better. I get paralyzed. Knowing I have two hours to work on something that could be done in two hours but could also take four, I tend to not start. With free space ahead of me, I work productively and complete tasks. Thinking about how long something will take is crippling to me. So many people assume I will answer those questions, so over the years I've learned ways of managing my contributions in ways that suit me.

I've worked with a lot of people who, in the face of a deadline, use significant time on making excuses or covering their backs. "You did not tell me of this requirement", "I was sick for three days", "the dependencies were more complicated than I thought". All of these are true, but the more individuals commit to deliveries, the more they also commit to making sure they're safe.

I work to change my world to daily (or more frequent) releases, so that we can plan (and get surprised) one item at a time. I work from the premise of trusting that we'd rather do good things and make an impact in the world we're contributing to. I don't want to be one of the people who, on a personal level, carries the responsibility for all surprises by just magically adding hours to work days. I would rather be one of the people who delivers value all the way to production fast, trusting there's always another day to do more.

Two-week sprints, sprint commitments under uncertainty, and all the clarification ceremonies and blame assignment when commitments fail seem like such a waste of smart people's potential.

Some of us quietly invest in the ceremonies others want. Others work to change the ceremonies to focus more on value.

I'm so happy that the big deadline up front has moved to a small deadline close by. It's better for the company but it's definitely better for me. The stress of artificial deadlines can be all consuming.

Tuesday, December 27, 2016

You may not even know how much you talk

"We should try being quiet to hear better what the other team members have to say", I suggested to the in-room product owner. He was game as long as it wasn't just him who needed to temper down his speaking but I would have to do that too.

And so we agreed. We would work on actively listening, and actively not saying what should be done. We would make room for others.

This was a practice I've suggested before. Working out ways of moving myself from the active-speaker-in-meetings role into speaking in the background, encouraging those with less volume to say what they want to say, and allowing them to make their own (inexpensive) mistakes. Practicing not taking the last word (or competing for it), not having to contribute even when I thought I knew better, and being quiet. And it isn't easy.

I'm perceived as the one who always speaks up. Sometimes I speak up about things others are afraid of saying. Sometimes I repeat things others are saying but are not being heard on. And sometimes I just have things to say myself.

I speak enough to remember negative remarks about this habit. The colleague a decade ago who told me, in a moment of frustration, that only those who don't know how to do things talk this much. The moments where I realize I'm filling in other people's sentences, or interrupting because I think I know what they will say anyway. I don't recall people complaining much about it, but it often hits my self-improvement analysis filters of things I would like to improve on: patience and the timing of my responses.

Looking into gender research, I came across the idea of a listener bias: the idea that we are socialized to think women talk more than they actually do, and to feel they are dominating discussions more easily than men. It led me onto a continued path of investigating the real dynamics. But as a first step, it inspired me to bring 'changing the speaking power dynamic' as the single rule for a women in agile summit at Agile Testing Days.

In my group of seven women and one man, getting excited over the topics we were discussing ended up with the single man taking more than a third of the speaking time. I asked later if he realized what had happened, and he did not.

In another group, I asked a man how the discussions went. He told me of the pain of not contributing as much as he could have, as he had been an expert in the topics, but he decided to play by the rules given and speak less to give room for the women's voices.

Alongside the experiment of quietness I share with my product owner, I will also take on another one. I just installed a gender timer to pay attention to how much we actually speak. It's my time to move from feedback and perceptions to a period of measurement. That should be interesting.

Saturday, December 24, 2016

Test Automation Leadership

I love being "just a tester". I've recently had great success in my company expressing what I do to non-development people explaining I'm "just a tester" but they know how no one is really just anything. It gives me a quick way of getting to breaking their perceptions of what "a tester" would do, and immediately opening their perception to the ideas that I will be more just like anyone else. I don't need to explain all the details to make the connection for us to share on work.

I don't act like "just a tester" according to most people's stereotypes. And I hope I break people's stereotypes, because I still am a tester, even if no one is ever just anything.

I think of myself as caring and self-organized. I try to think about what is best for those I've committed to (my company being one), and I don't expect others to come and tell me my tasks. I welcome suggestions of tasks, I love being asked to participate in providing feedback, and I actively frame tasks to drive forward business value while learning about business value.

So last night, when I listened to a great podcast interview with Angie Jones on Hanselminutes and heard Angie talking about "leading that effort" as an automation engineer, I finally gave in and recognized a wordplay I've been doing. I say I'm not a leader. Yet I lead all sorts of improvement efforts. I lead because people choose to follow. Speaking up about things that others listen to seems to be leadership. Starting examples of work others can join seems to be leadership. Making sense of our goals and approaches to get to them, and sharing observations, seems to be leadership.

People who are followed are leaders. Which made me think of a short video everyone should watch.

I can easily see Angie leading efforts in test automation as a skilled test automation engineer and a very outspoken personality. I can't see some of my colleagues leading efforts in test automation, because their focus is on the details of automation code, not on a vision of where this would go and how it could be better.

We need leaders and we need experts. Sometimes those are the same people. Other times not. And I would suggest that you can't define leaders based on their work titles but more through their personal characteristics.

Confessing I might be a leader after all brings another perspective to me. It still does not mean I should be "held to a higher standard". It still does not provide an open checkbook for my time in debates on anyone else's terms. I'm around to learn - with others.


Throw one away to learn

I sit in front of a computer, testing Alerts. It's a feature where something happens on the client side that triggers a dialog for the user, but it's not enough that the dialog is presented to the user; there's also an admin somewhere supporting the user who needs to know and understand what went on, without showing up behind the user's back just to understand. I keep triggering the same thing, and as I move back and forth between my points of observation, I make notes of things I can change. I test the same thing to learn around it, to keep all my mental energies focused on learning.

Around the same time, my test automation engineer colleague creates a script that does pretty much the same thing without human interference. Afterwards she has only this one script, I have 200 tests, and I call her test shallow, offending her. It's a good start, but there's more to testing the feature than the basic flow.
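
For contrast, her single scripted check might look something like this minimal sketch - the element ids, URLs and Selenium tooling here are my hypothetical stand-ins, not her actual script:

# A minimal happy-path check: trigger the alert on the client side,
# then verify the admin side shows it. One flow, no variations.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://client.example.com")
    driver.find_element(By.ID, "trigger-alert").click()
    assert driver.find_element(By.ID, "alert-dialog").is_displayed()

    driver.get("https://admin.example.com/alerts")
    assert "alert" in driver.page_source.lower()
finally:
    driver.quit()

The 200 exploratory variations around this one flow are exactly what a script like this does not cover.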

The main purpose of the testing I do (exploratory testing) is learning. The main purpose of the testing my test automation engineer colleague does is long-term execution of checks on previous learning. I believe that while keeping track of previous learning is relevant, the new learning is more relevant. Bug "regression" does not just happen. It happens for a reason, and I love the idea of building models that make my developers think I have magical superpowers for finding problems they (or I) couldn't imagine before.

Last night, listening to the great podcast interview with Angie Jones on Hanselminutes, I got triggered by Scott Hanselman repeating the idea that you should automate everything you do twice. With that mindset, me repeating the same thing a hundred times to feel bored enough to pinpoint the little differences I want to pay attention to would never get done. Angie saying "we can automate that" and facing resistance from people like me has so many more layers than the immediate fear of being replaced. It has taken me a lot of time to learn that you can automate that, and I can still do it manually for a very different purpose, even with people around me saying I don't need to do it manually when there is automation around.

My rule is: automate when you are no longer learning. Or learn while automating, and learn more in whatever frame you need for it. My biggest concern with test automation engineers is where their learning focuses - on the automation toolset over the product - and the gap in quality information that can create if not balanced.

This morning, I continued by listening to Brian Marick on useful experimentation, with two programming examples. He talked about the Mikado Method, where you throw away code while refactoring with tests, with the idea of learning by doing before doing the final version. And he talked about Corey's technique of rewriting code for user stories of three-day size until, through learning from previous rounds, the code writing fits into one day of coding. Both methods emphasize learning while coding and making sure the learning that happens ends up in the end result that remains.

I can immediately see that this style of coding has a strong parallel to exploratory testing. In exploratory testing, you throw your tests away only to do them again, to use the learning you acquired through doing the same test before - except that it is not the same test when the learning comes into play. Exploratory testing is founded on deliberate learning, planning to throw one (or more) away while we learn. So before suggesting we should "automate that", consider whether the thing you think of automating is the thing I think I'm doing.

I have three framing suggestions. For the people who propose "we can automate that": make it visible that you understand you can automate some part of what we are doing, and show appreciation for the rest of it too. For the people who react negatively to "we can automate that": consider welcoming the automation and still doing the same thing you were about to do. For the people who actually think they are doing the same thing by hand without understanding what they are learning in the manual process: learn more about learning while you're testing. The keyword for that is exploratory testing.

Thursday, December 22, 2016

The Binary of Coding in Testing

I've just spent a week deep in code, and yet yesterday in a podcast bio, I introduced myself as a non-programming programmer. To start today, I'm reading a blog post framing "I am not a developer. I have a background in Computer Science, I know a thing or two about writing code, but I am definitely not a developer."

People care what they are called, and I care to be called a tester, and I've only recently tried on the additional label of "programmer".

Yesterday, I got called "my manual tester" by the Product Owner, with an instant annoyance reaction to both words: "my" (he owns the product's priorities, not me or my priorities in a self-organized team) and "manual" (exploring with a focus on test automation and thinking smarter than those who just focus on regression test automation - is that manual?).

There's this idea going around with testers that in coding there are somehow "those who do it well". It hit a particularly sensitive spot seeing it yesterday, after a week of reading and linting generally not-the-best code, and thinking back to my experiences with mob programming that made me realize there are not many people who inherently, all by themselves, do it well. Some, over time, develop better habits and patterns, but even they have more to learn - and usually, the best of them realize learning is never done. There are no people who "do it well". There are people who "do it and learn to do it better". Other than the fact that there's a large group of people who don't do code at all, the rest of it is a continuum, not a binary of being good. Thinking of programming as something you either have or don't is not a good way to approach this. I rather frame it as a friend helped me frame it: everyone who has written Hello World is a programmer.
I usually rather leave programming to people who enjoy it more than I do. That's why I'm a non-programming programmer. But there are aspects of code that I enjoy tremendously. Creating it together to solve problems. Reading it to figure out if there are better ways. Comparing it to expectations that can be codified (lint) and to patterns of businesses and real-world concepts. Understanding what is there (and could work) and what isn't there (so can't work). Making it cleaner when it was already made to run in the first place. Extending it within the limits of what I'm just slightly uncomfortable with. And that is already quite a lot.

I loved how Anna Baik framed this.
I find that the binary thinking keeps people with potential out, and leads managers who want to see more code in testing to overemphasize code, setting foolish targets that make good testers bad programmers. Getting rid of the binary, we would start talking about how good testers could become good coders too, without giving up the good tester aspects they have.

It's really about learning. This morning, I complimented my daughter's Play-Doh creation, as it had significantly improved from the first version she showed last night. Her words back to me serve as a great reminder of the thing I would so love to remember every day at work:
"All it takes is practice and I will keep practicing". 

Tuesday, December 20, 2016

Respecting testing not testers

There's a recurring discussion I'm having with pretty much these points:
  • People who joined us as "testing specialists" are no longer testing specialists but specialists of everything software. 
  • We value good testing and good people with good testing skills, but we see no need of having testers or testing specialists. 
It's the same discussion again and again. I identify as a tester. I'm told by some of my peers that I don't qualify because I am also able to do other things (and will, for the good of my company - knowing that a lot of the time my special skills are the best thing I can offer, as others don't have the same deep skills). I'm told by non-testers that I can do testing, but not identify as a tester.

This is a recurring discussion that makes me sad, and often makes me feel like I want to quit the IT industry.

This "we're all developers", "we don't let people identify as testers" discussion is a lot like the discussions on gender. Be it "guys" or "men", the women are supposed to feel included because "naturally these words include all people". 

I want to be a tester. I want to be respected as a tester. Any references to "monkey-testers but we know you are not one" are just as offensive as saying that I'm one of the men, "not like other women". 
 
It's not enough to see that testing is important. You would also need to see that "tester" is a description of a specialty and a real profession. Stop pushing people out with wordplay, and just help them grow within (and stretch) their interests. We need good people to create good software; being inclusive would be a better route.

Thursday, December 15, 2016

Mixed messages - learning to live with it

I was working with a brilliant tester, and she looked puzzled. "I don't really understand what you want. Can't you just make up your mind?", she exclaimed. She added details: "Last week, you wanted me to pay attention to not generating too many bug reports the developers would reject. This week, you want me to address bugs with developers that they wouldn't consider bugs with their current understanding."

I could see she was frustrated and hoping for clear guidance. I did not have it any more than she did, but I started to see her point. She was looking for a recipe to excel in the work she was doing. I did not believe there was a recipe you could follow, but there were all sorts of triggers to consider for trying out something different - experiment intentionally and see how the dynamics change.

Out of our discussion about what we were discussing, she could add only one piece to her understanding: things would never be completely clear, they would keep changing as either one of us learned, and she could just do whatever she believed was right. It was her work, her choice.

As software professionals (and testers in particular), we get mixed messages and conflicting requirements. We work around them the best we can. Outsourcing clarity to a "manager" is one of the worst practices I can imagine for someone doing thinking work.

Take in conflicting information. And make your own choices. And remember: it's ok that not all your choices are right. Unlearning is sometimes more important than learning something in the first place.

"It depends on a context" is sometimes a comfort word we result in, when we feel there's a rationale that we can't express clearly. Right now I prefer to think of my lack of knowledge and experience in so many things that future yet holds as part of this context. We do things we believe are right and responsible. And we own both our successes and failures.

Wednesday, December 14, 2016

All people are decision makers

I was testing a new implementation of an old feature and, as a new person, figuring out what of the old is intentional and what is new. To many of the questions I asked, I got told "it has always been this way, and it's ok". I chose not to fight - I made an active decision not to care.

The reason I could decide not to care is that I know there will be more chances to address that particular detail. I knew that soon I would rally together a group of "real users" to learn with, on whether the things I think are relevant are indeed relevant. And changing it then versus changing it now is really not that big of a difference. The users seeing a problem (and feeling they got heard) may just be more valuable than users never seeing that problem.

I make decisions all the time. I decide on what I test and what I don't test. I decide on what I learn in detail, and where gut feeling or plain ignorance is sufficient. I decide what information to fight for and when. I decide how hard I will fight.

There's an expression in the testing community that really ticks me off but I most often just try to dismiss it:
Testers are not decision-makers but information providers.
All people are decision-makers. But very few of us make big decisions alone. The big decisions (like a release after a two-year project) depend on the prior small decisions being done well enough.

I'm a tester, and I regularly and habitually make decisions on releases. Those decisions tend to be small in scale because, since agile, the world has changed. Should the testing community reassess more of the shared learnings from the time before agile?

Tuesday, December 13, 2016

Pair em up while they learn!

"But you can't do that", he said. "It's going to be boring", he exclaimed. "You have to give them all their own computers or they won't learn a thing".

With 30 kids in the room, paired up in front of one computer for each pair, I had a moment to deal with a concerned parent who had strong feelings about how kids learn - by doing, not by watching.

"I know what I'm doing", I tried telling him. "I've done this before". "We don't just pair them. Just watch. We do a thing called strong-style pairing". He did not look convinced, but I needed to step out and attend to the class anyway.

We introduced the rules. "Who is in front of the keyboard now? You're the hands only, no thinking allowed. Who is not in front of the keyboard? You're the boss. Use words to say what you want the other to do".

The buzz, the laughter, the learning commenced. And they loved it, being fully engaged as we kept reminding them of the rules: the one not touching the keyboard is the boss, and we change positions every four minutes.

The concerned parent seemed happy too. He followed through the session, and I saw him smile. I suspect that was a meeting with a developer who has either never pair programmed or, in particular, never strong-style pair programmed.

This event happened at my work, and the kids were my colleagues' kids. The concerned father is not an exception. Adults who know how to code are concerned about how their kids will feel while coding, because they care.

I've sat through my share of intro events to programming with my daughter. I find that a lot of sessions pair kids (esp. young kids) with their parents, and while they then have a personal tutor, in my experience they learn less.

So I work with the rules: pair the kids up. Parents are welcome to stay and program, but kids pair with kids, adults pair with adults. There's nothing more motivating for kids than realizing they're completing the programming puzzles faster than their parents. After all, collaboration comes more naturally to kids, who are not yet worried about "efficiency" - results matter. Us adults should keep that in mind a little more often.

Monday, December 12, 2016

Thinking of Heartbeat

Where I work, every room seems to have an always-on monitor for a test automation radiator. You know, one of those screens that, if all is well, turn dark blue after a while to indicate there have been no failures in ages (however "ages" is defined). One where any shade of blue is what you seek, and if you see yellow or red, you should be concerned.

As I've walked around, the amount of not-blue I see has been significant. My favorite pastime, almost, is to see how people respond to someone paying attention, and the reactions vary from momentary shame to rationalized explanations.

One of the teams that has made more of a positive impact with their dark blue radiator got more of my attention, as the bright colors weren't taking the focus. I started reading the names of things and ran into something interesting: Heartbeat.

As I saw that word, I had a strong connection to the idea of what it could mean. A small discussion confirmed: heartbeat tests are the set that tells us whether the system we're testing is even alive enough that we could expect any other tests to give us any information.

For the things my team tests, having a heartbeat depends on the backend systems, including a bunch of things running somewhere in the Amazon cloud. If any of the things we depend on fails, we fail. And for granularity, we want to know if the brokenness is due to internal reasons (we should learn something) or external reasons (we might have more limited control).
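
A heartbeat suite can stay tiny. Here is a minimal sketch of the idea, assuming hypothetical health endpoints and pytest - the split between internal and external checks is exactly the granularity point above:

# Hypothetical endpoints - the real ones depend on your deployment.
import pytest
import requests

INTERNAL_HEALTH = "https://our-service.example.com/health"
EXTERNAL_DEPENDENCIES = {
    "auth-backend": "https://auth.example.com/health",
    "message-queue": "https://queue.example.com/health",
}

def test_internal_heartbeat():
    # If this fails, the brokenness is ours: something to learn from.
    response = requests.get(INTERNAL_HEALTH, timeout=5)
    assert response.status_code == 200

@pytest.mark.parametrize("name,url", EXTERNAL_DEPENDENCIES.items())
def test_external_heartbeat(name, url):
    # If this fails, a dependency is down and the rest of the
    # radiator's results tell us very little.
    response = requests.get(url, timeout=5)
    assert response.status_code == 200, f"{name} is not responding"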

With all the teams I see, I only see one that has a heartbeat measured with test automation separately.

One team's heartbeat (a core dependency) is another team's core functionality. You only need a heartbeat when you can't observe that the other is alive by other means - when their test statuses are not reliable, available or understandable to you.

Disconnect of Doing and Results

For most of us, our work is defined by availability, not results. For a monthly salary, we make (the best of) ourselves available for a particular number of hours every week, and those hours include a good mix of taking things forward for the company and taking oneself forward for the future of taking things forward for the company.

The weekly hours, 37.5 in Finland, remind us of a standard of sustainable pace. If you spend this amount of time thinking focused on work (and my work in particular is thinking), the rest of the time is free. The idea of the weekly hours helps protect those of us with a habit of forgetting to pay attention to what is sustainable, but also defines the amount of availability our employers are allowed to expect of us.

In the thinking industry, the hours shouldn't really be the core but the results. Are we using our thinking hours to create something of value? Being available and being busy are not necessarily the actions that provide results.

There have been two instances for me recently when I've paid attention to hours in particular.
  1. Learning about a Competitiveness Pact Agreement adding 24 additional working hours to 2017 (and onwards) 
  2. Setting up my personal level of Results I'm happy with
The first one, as I became aware of it, was more of a motivation discount. It made me painfully aware of how much I hate the idea of working on the clock, and how much extra stress I get from having to stamp myself in and out of the workplace in my new job, compared to reporting all hours manually in my last one. The new system makes me ashamed of the hours I put in - as they don't stay contained at the office.

The latter is a thing I seem to be continuously discussing. I would, on a very personal level, like to see that my work has an impact. And I monitor that impact. I'm aware that often putting more hours into trying to create an impact can give me more of an impact.

As I become aware that I just spent a week at a conference (long-term gain, I hope), I become aware that this investment is away from something else. I could have driven forward something else. And that something might be short-term and long-term gain combined.

There's no easy formula for me to say when I would be happy with the results I get for the hours I put in. I know my hours alone don't do much unless they are properly synced with the hours of others.

I would like to think I know the difference between doing and results. Not all doing ends up becoming valuable. Some things don't produce. Appearance of busyness isn't a goal. Being busy isn't a goal. Getting awesome things done is the goal. Yet I find that with a little more time invested, it is possible to get more awesome things done. My work is thinking, and I'm thinking without fixed hours.

Are there a few more awesome things I could get done before the year ends?

Wednesday, December 7, 2016

Tester as a Catalyst

Back in my school days, I was a chemistry geek. And if life had not shown me another path through being allergic to too many things, I would probably be a chemical engineer these days.

It seems very befitting that my past team helped me see that a big value I provide for the team could be best described in terms of chemistry - of me being a catalyst in my teams.

Catalysts don't make things happen that wouldn't happen otherwise. Catalysts speed up the reaction. Catalysts remain when the reaction is done. The potential for a reaction exists, but sometimes you need a catalyst to get to a speed where you notice visible changes.

I love to look at my contribution from that perspective. The things that happen with a tester like me around are things that could happen otherwise, but I speed up the reaction with information that is timely, targeted and delivered in a way that is consumable. For many of the things, without a tester like me, the reaction doesn't start or is so slow it appears not to happen. A catalyst can be of any color, including completely invisible. I would like to think of myself as a more colorful addition.

I'm working through my techniques for being a good catalyst in my teams. Here are some that I start with.

Introduce ideas and see if they stick

I have had the chance (through reading and conferences) to be introduced to many people's ideas of how the world of software could be better. The ideas build up a model in my head, intertwined with my personal experiences of delightful work conditions with great results that I want to generate again and again. The model is a vision into which I include my perceptions of what could make us better.

Out of all the things I know of, I choose something to drive forward. And I accept that change is slow. I speak to different people and through different people. I work on persistence. And my secret sauce for finding the patience I lack is externalizing my emotions: counting how many times it takes, writing notes for future me about my "predictions" to hide out of sight, and venting to people who are outside the direct influence of the change.

I believe that while I can force people into doing things, their volunteering is a bigger win, leaving us working on the change together. I can do this as I'm "just a tester" - as much as anyone is just anything.

Positive reinforcement

People are used to getting very little feedback, and I find myself often in a position of seeing a lot of things I could give feedback on. And I do, we talk about bugs, risks and mitigation strategies all the time. But another thing I recognize myself doing is positive reinforcement.

I found words for what I do from Jurgen Appelo. He talks about managers shaping organizations with the best behaviors leaders are willing to amplify. I try to remember to note exceptional, concrete contributions. I try to balance towards the good.

And recently, I've worked more and more on visualizing and openly sharing gratitude with kudo cards. Not just telling a developer that her piece of code positively surprised me on some specific aspect, but also sharing the same with her manager, who has less visibility into the day-to-day work than any of us colleagues in teams.

If I want to see more of something, instead of talking about how it is missing in one person, I prefer talking about how it is strong in another.

Volunteering while vulnerable

I often find myself in the middle of work I have no clue how to do. I used to hate being dependent on others, until I realized there are things that wouldn't happen without a "conscience" present.

In mobbing, I could see how the developers look at me and clean up the code, or just test one more thing before calling it done.

It's not what I do. It's what gets done when I'm around. And in particular, it's not about what I can do now, it's about what I can do with people now and what I can grow into.

Amplifying people's voices

A lot of people, not just testers, feel they are not heard. Just as my messages get through best through other people, I can return the favor and amplify their voices.

I can talk to the managers about how it feels for a developer with a great idea to be dismissed until I repeat the message. Through these feelings discussions, I've had managers pay special attention to the people I seem to find who have less power in their communication.

We're stronger together, I find. We have great ideas together. It's not a competition. It's a collaboration to find the diverse experiments that give chances to perspectives that only make sense in retrospect. Emergence is the key. Not everything can be rationalized beforehand.

How to write 180 blog posts in a year

Yesterday, on December 6th - the Finnish Independence Day - I learned that I have apparently written 180 blog posts in 2016. The detail alone surprised me, but the context of how I learned this trivia was even more surprising. It was one of the mentions of my many achievements as I was awarded the MIATPP award - Most Influential Agile Testing Professional Person - at Agile Testing Days 2016 in Potsdam, Germany.

I'm still a little bit in a state of disbelief (influential, me?) and feeling very appreciated with the mentions, likes and retweets on Twitter. But most of all, I get stuck on the idea of 180 blog posts - how did that happen?

With the number coming out, I wanted to share the secret to regular blogging with all of you. I learned it from people who talked to me about my blogging and in particular from Llewellyn Falco.

The secret to regular blogging is not planning for it. Not monitoring how much you're blogging. Putting all your available energy around blogging into the action instead of the status.

There's a time for doing and a time for reflection. I apply cadences to reflection. There's the daily cadence of showing up to try things. There's the weekly cadence of seeing progress in small scale. There's the monthly and quarterly cadence of trying to see flow of value I generate. And there's the annual and lifelong cadences of appreciating things I've learned and people I've learned with.

I write for myself. It still surprises me that people read what I write. I'm still "just a tester" believing no one is really just anything. I live and breathe learning, and blogging is just one of the many tools to learn. If I write (or give face to face) awful advice, I can learn from it. If I don't give the advice, if I don't share my experience, I'm one more step away from being able to learn what could really work.

Share to learn. Care to learn. And wonderful things will emerge.