Monday, January 30, 2017

Entrepreneurship on the side

I had a fun conversation with Paul Merrill for his podcast Reflection as a Service. As we were closing the discussion in the post-recording part, something he said led me to think about entrepreneurship and my take on it.

I've had my own company on the side of regular employment for over ten years. I have not considered myself an entrepreneur, because it has rarely been my full-time work.

I set up a company when I left a consultancy with the intent of becoming independent. I had been framed as a "senior test consultant", and back then I hated what my role had become. I would show up at various customers that were new to the consultancy, pretending I had time for them, knowing that in my worst weeks I had a different customer for each half-day. Wanting to be a great tester and make a great impact in testing, I felt that type of allocation made it impossible. I was a mannequin, and I quit to walk away from it.

Since I had been in contact with so many customers, I had nowhere to go. According to my non-compete clause, I couldn't work with any of those companies. They were listed in a separate contract, reminding me of where I couldn't work. One of the no-go companies back then was F-Secure, even though the consulting I had done for them was a single Test Process Improvement assessment. F-Secure had a manager willing to fight for their right (my right) to employ me, and as he stepped up, they vanished from my no-go list and I joined the company for six months that turned into three years.

Since I had set out to leave in six months, we set up a side-work agreement from the start. And in my three years with F-Secure, I started learning what power entrepreneurship on the side could have.

In the years to come, it gave me a personal budget for things the company wouldn't have budget for - including meetups and training that my employers weren't investing in for me. It allowed me to travel to #PayToSpeak conferences I could never have afforded without it. A paid training day here and there was enough to give me the personal budget I was craving.

I recently saw Michael Bolton tweet this:
I've known I'm self-employed on the side, and it has increased my awareness that everyone really is self-employed. We just choose different frames, for various motivations. On the side is a safe way of exploring entrepreneurship.

What's worth repeating?

This is again a tale of two testers, approaching the same problem in very different ways.

There's this "simple" feature, with more layers than first meet the eye. It's simple because it is conceptually simple. There's a piece of software on one end that writes stuff to a file that gets sent to the other end and shown on a user interface. Yet it looks complicated after just a day spent on it.
  • it is not obvious that the piece of software sending is the right version. And it wasn't, due to an updating bug. Insight: test for the latest version being available
  • it is not obvious that whatever needs to be written into the file gets written. Insight: test for all intended functionality being implemented
  • it is not obvious that when writing to the file, it gets all the way to the other side. Insight: test for reasons to drop content
  • it is not obvious that on the other side, the information is shown in the right place. Insight: test for mapping what is sent to where it is received and shown
  • it is not obvious that what gets sent gets to the other side in the same format. Insight: test for conversions, e.g. character sets and number precision
  • it is not obvious that if info is right on one case, it isn't hardcoded for that 1st case. Insight: test for values changing (or talk to the dev or read the code)
It took me a day to figure this out (and get the issues fixed) without implementing any test automation. For automation, this would be a mix of local file verification (catching the sent file on a mock server - manually I can turn off the network to keep my files, but our automation needs the connection and thus a workaround), a bunch of web APIs and a web GUI.
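The conversions insight in particular can be cared for at the unit level, with no running system at all. A minimal sketch in Python, where `write_payload` and `read_payload` are hypothetical stand-ins for whatever writes the file on one end and reads it on the other:

```python
import json

def write_payload(values):
    # Stand-in for the sending end: serialize values into the file's bytes.
    return json.dumps(values, ensure_ascii=False).encode("utf-8")

def read_payload(raw):
    # Stand-in for the receiving end: parse the bytes back into values.
    return json.loads(raw.decode("utf-8"))

# Round-trip checks for the conversion insights: character sets and number
# precision must survive the trip through the file format.
assert read_payload(write_payload({"name": "Mäkelä"})) == {"name": "Mäkelä"}
assert read_payload(write_payload({"reading": 0.1 + 0.2})) == {"reading": 0.1 + 0.2}
```

Checks like these pin down the format conversions without needing the client, the network or the GUI in place.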

So I look at my list of insights and think: which of these would even be worth repeating? And which of these require the "system" for repeating them, and which could just as well be cared for from a "unit" perspective? Rather straightforward mapping architecture, yet many components in the scenario. Unlikely to change much, but to be extended to some degree. What automation would be useful then, if we did not get use of it while creating the feature in the first place?

And again I think there is an overemphasis on system level test automation in the air. Many of these I would recognize from the code if they broke again. Pretty much all but the first. We test too much and review / discuss / collaborate too little.
Can I just say it: I wish we were mob programming.

Tuesday, January 24, 2017

Appreciating Special Programmers

I'm having this phase in my life where I feel like going to conferences to speak isn't really my thing. I don't think it is the infamous imposter syndrome, because there's plenty of stuff for me to share. While I might have low self-esteem in some areas of life, work isn't one of them.

So in this moment of crisis, I think of things that have changed for me. I realize one of the things that has changed a lot is how I identify. I remember tweeting with Adi Bolboaca about two years ago, insisting that testers should not be called developers. I can see the irony in now organizing European Testing Conference with Adi and not being able to recall why I would ever have wanted to insist on that, other than fear of losing appreciation for my special skills in testing.

So I keep thinking about what (and who) changed my mind, and realizing it has been a group of individuals who never tried changing me.

It all starts with Vladimir Tarasow, who invited me to Latvia to speak at an Exploratory Testing training and then the Wildcard conference. Wildcard was one of the first mixed-role conferences I've been to, and Vladimir and his colleagues were the first developers I met who cared (enough to act on it) about the community of testers and testing.

Since I was at Wildcard, I participated in sessions. And one of the sessions was a full-Saturday Code Retreat, facilitated by Adi Bolboaca.

I loved the Code Retreat and could recognize my team would love it too, so Adi came and taught a wonderful day for my two teams' programmers. And unlike at the conference, where I sat through the day, here I stepped back, feeling insufficient.

These people, together with my programmers at work, started a learning path in which I came to appreciate code quality in relation to end-user quality in our efforts, and started looking more deeply into the ways those two are intertwined.

Add into the picture being encouraged to re-learn to program through Mob Programming and Strong-Style Pair Programming, and I can't even pinpoint who, when or where changed my mind. I can just recognize that it did, and find it fascinating.

I think the keys to this change have been:
  • No one tried to "change" me but just allowed safe experiences and discussions where we could agree to disagree, and we did
  • I had free capacity for learning widely over my previous choice of going deeply into exploratory testing, as every day brings more capacity if you just stick around long enough
  • Other things I wanted (closer human connection at work, not sub optimizing testing but optimizing the product development) required me to do things I wouldn't have otherwise volunteered to do
  • I connected with great people on my way, whom I can only properly appreciate in hindsight
So Vladimir, I owe you a beer. Clearly. Thank you. I never realized how many aspects of our paths crossing had a meaning to me. 

Frustrations on system test automation efforts

For a tester who would rather spend her time not automating (just because there's so much more!), I spend a lot of time thinking about test automation. So let's be clear: people who choose not to spend their days in the details of the code might still have relevant understanding of how the details of the code could become better. And in the domain of testing, I'm a domain expert: I can tell the scopes in which assumptions can be tested, to a level where I would almost believe them.

Back in my earlier days, I was really frustrated with companies turning great testers into bad test automation developers (that happens, a lot!), and these days I'm really frustrated with companies turning great testers away and instead hiring test automation developers. Closing one's eyes to the multitude of feedback you might want while developing makes automation easier - yet just not quite the testing one may be imagining. One thing has changed from my earlier days: I no longer think of becoming a bad test automation developer as the end for those people, as long as they start treating themselves as programmers and keep growing in that field. It's more of a career change, benefiting from the old domain knowledge. I still might question, based on my samples, the depth of testing domain knowledge of many of the people I've seen make that transition. Becoming a really good exploratory tester is a long road, and often people make the switch rather sooner than later.

Recently, I've been frustrated with test automation specialists with a testing background who automate from the system / user perspective and refuse to consider that while this is a relevant viewpoint, a less brittle one might involve addressing things from a smaller, more technology-oriented perspective. That unit tests are actually full-fledged tests, an option for keeping track of things that should work. That it is OK to test a connected system with a fake connection. And that automation just doesn't need to be a simulation of what a real user would do. Granularity - knowing just what broke - is more relevant.
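To illustrate the fake-connection point, here's a minimal sketch with hypothetical names: the component under test talks to a fake instead of a real network connection, so a failure points at the component's logic rather than at the network or a server:

```python
class FakeConnection:
    """Records what would have been sent, instead of using a real network."""
    def __init__(self):
        self.sent = []

    def send(self, message):
        self.sent.append(message)

class StatusReporter:
    """The unit under test: formats status updates and hands them to a connection."""
    def __init__(self, connection):
        self.connection = connection

    def report(self, component, state):
        self.connection.send({"component": component, "state": state})

# A full-fledged test of the reporting logic, with no connected system at all:
fake = FakeConnection()
StatusReporter(fake).report("client", "ok")
assert fake.sent == [{"component": "client", "state": "ok"}]
```

When a test like this breaks, there is exactly one place to look.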

I believe we run our test automation because things change, and as a long-time tester, I care deeply what changed. I recognize the changes that my lovely developers do, and I have brilliant ways of being alerted both by them with a lot of beautiful contextualized discussions but also just seeing from tools what they committed. I read commit comments, I read names of changed files and their locations, and I read code. I recognize changes coming in to our environment from 3rd party components we are in control of and I recognize changes into the environments that we can't really control in any way.

And while our system test automation works against all sources of change, I prefer to think of my lovely developers, rather than my users, as the audience for the test automation's feedback. The feedback should be, for the developers, timely and individualized to the change they just introduced. A lot of the time I see system test automation where any manual tester does the timely and individualized part better than the system created for this purpose.

Things fail for a reason. Build your tests granular to isolate those reasons. 

Monday, January 23, 2017

Testing in the iterative and incremental world

I've run my fair share of sessions where we test something together. My favorite test targets recently have been DarkFunction Editor (making 2D sprite animations), Freemind (mindmapping) and ApprovalTests (golden master test library), but there's one thing common to all three. When I introduce them to my groups for testing, they're static. They don't change while we test. They are not early versions with the first of the user stories implemented, to grow much later. They are final releases (until the next one comes along).

In the projects I work with, I test a lot of things that are not yet final releases. And it's almost a different ballgame to find the right time to give feedback. In my experience, early testing has been crucial for allowing time to understand what we're building, to guide it also from a testing perspective. As we learn in layers, the testers too need time to peel a layer at a time to be deep by the time of the final rounds. But it has also been crucial to fix issues before they become widely spread assumptions that can't be questioned without the whole brick structure falling down on us.

Some years ago, I ran a session playing with this dynamic. I gave a group of testers the scope of the product backlog (stories) for the 1st increment and asked them to plan their test ideas. Usually very little came out. I then gave the 2nd increment, with very similar results. Then I fast-forwarded 10 sprints to a close-to-ready game, and I got a long list of things to consider. The point of the session was to show that thinking about the ready state is easier, but having done that, you can categorize your ideas to figure out how early you could run tests on some of them.

I think it is time for me to experiment with three different new sessions. 
  1. Incremental test planning/design - bring back an improved version of something I have not paid attention to for years. 
  2. Incremental exploratory testing - figure out a way of running a course where the test target is not static but grows incrementally
  3. Test idea creativity - while executing and generating ideas now come for me intertwined (curse of knowledge), looking around me I realize that the creativity part of it could use more focus. 
The first is easy, so I'll just schedule a trial run for my local community. The two others take a bit more processing, and for #3 I think I might know just the perfect place for it - a developer conference. 

Thursday, January 19, 2017

Re-testing without the tester

Some days we find these gems we, the testers, like to call bugs. Amongst all kinds of information, bugs are often things we treasure in particular. And we treasure them by making sure they get properly communicated, their priority understood and when they're particularly valuable, reacted on with a fix.

We're often taught how the bug reports we write are our fingerprints and how they set our reputation. And that when something relevant was found and fixed, the relevant thing is worth testing again when a fix is available to see that the problem actually has gone away.

We call this testing again, of precisely the same thing we reported the bug on, re-testing. And one of the first things we usually teach new people is that there is a difference between re-testing (precise steps) and regression testing (risk around the change made).

Today I got to hear something I recognize having said or felt many times. A mention of frustration: "they marked the bug closed as tested, and it turns out they only checked that a big visible error message had vanished, but not the actual functionality".

This was of course made even more relevant with external stakeholders coming back with the feedback that something had indeed been missed.

What surprised me, though, was the quickness of my reaction to mention that it was not *just the tester* who had failed to retest the fix. It was also the programmer who made the fix, who had completely closed their eyes to the actual success of whatever change they did. And that is just something I want to see done differently.

It reminded me of how much effort I've put into teaching my close developers that I will let fixes pass into production without testing them - unless they specifically ask me to help because they are concerned about side effects or don't have access to the right configuration that I would have.

Re-testing needs to happen, but re-testing by a tester is one of those relics that deserves more careful consideration of when it is done. 

Two people who get me to do stuff

Sometimes, I feel like I'm a master of procrastination. Some types of tasks (usually ones requiring me to do something small by myself, with little dependency or relevance to other people) just seem much bigger than they realistically should. I wanted to make note of two people I respect and look up to for making me do things I don't seem to get done.

'I'll just sit here until it's done'

There's a team next door that works on the same system, though we could easily organize our work so that we don't share that much. I had decided, however, that I wanted to try out running their test automation, maybe even extending it when things I want to test benefit from what they've built. And I got the usual mention: there are instructions, just three steps. So I went and followed the instructions, only to hit the (typically) unlucky day when they had changed everything except their instructions while upgrading their test runner.

So a day later, I hear they've improved the instructions and we're back to just three steps. As I work on something else, I don't really find the energy to go back and see how things are. I gave it a chance, it did not work out, and it's not like I really need it anyway. So my favorite colleague from that team comes into my team room with his laptop and sits on the corner of my desk, saying: 'Try it. I'll just sit here until it's done'. And I try it, and five minutes later we're having delightful discussions on my feedback on making sense of this as someone new.

Thinking back to this, I realize this is a tool he uses all the time. Actively deciding something needs to be done and committing his time to insert positive pressure by just being there. Setting an expectation, and making himself available.

'Let's pair'

Another person takes it further. They volunteer to pair and actively schedule their time to get more out of the shared work. Sometimes their 'Let's pair' attitude feels pushy, but the results tend to be great. It takes time to get used to the idea of someone being there with you while you do something you know you could sort of do by yourself.

As one of the organizers of European Testing Conference, they have paired with every one of us. The pairing has supported the timely doing of things, but also created content we wouldn't have created without pairing. On the other hand, it also created schism when the style of pairing was a bad fit.

There was a task that I needed to do, and I was trying to find time in my busy schedule to do it. With them proclaiming 'Let's pair on it', it got done. And while I was sure I had the best skills for the task, I was again reminded of the power of another person in identifying things I could be missing.

From Envy to Stretching

I find it extremely hard and energy-consuming to force myself on people who are not actively and clearly inviting my participation. So I envy people who, with a positive attitude, just go and do it, like these two. Recognizing the envy gives me a personal stretch goal. Try it, do more of it, find your own style.

It's not about doing what they do, but about knowing whether doing what they do would help you in situations you experience. 

Wednesday, January 18, 2017

A day in the life of a bug report avoider

I'm doing really well on my "No Jira" project at work. Four months in, and I've so far been forced to write only one bug report in Jira, even though there have been many, many things I've helped identify, address and fix.

Today is one of my particularly proud days. I was testing this new thing the developer had just introduced, as soon as it was declared available. And as I was puzzled and did not seem to understand enough to get the response out of the software that I wanted, I immediately asked him. He walked up to my desk from the other corner of the room, and was just as puzzled as me until he told me to check the version number. The number does not (yet) tell me much, but it made him realize I was on the wrong version. And a minute later, he realized his build job had failed without letting him know about it.

So I get the working version, and my main concern is these five similar switches that are not really similar. They're on/off, one of the five is a master switch for three of the others, and the combinations are pretty hard to grasp. So I turn them all off, and then just try turning the master switch on. And I end up confused, again calling the dev over to show me the basics. This time he is puzzled a little longer. It worked for him. It does not seem to work for me. It takes a while before he realizes that of course I have the combination that causes the most confusion. I thank him for the help and hear him say five minutes later: "I will change that master switch. It can't work that way."

I put a lot of personal energy into trying to figure out not just how to find the problems, but how to create experiences around those problems that make people want to fix them. Jira (and similar bug tracking tools) was long in my way here.

But I realize I can do this because I have no fear where I work, whether the fear would be real or perceived. I trust that I can drive things in ways that I believe make things better. I don't feel the need to leave a trail of bug reports to show my work. And I'm grateful to be in this position, as it sets us up, together as a team, better for success. 

Tuesday, January 17, 2017

Step Up to the Whiteboard

You hear an idea somewhere and realize it's something you could do but aren't doing. You decide to try it out, but feel awkward at first about changing the dynamic of how you work - after all, people are used to you filling your role in a particular way. But over time, you get the courage. And once you get started, you don't want to stop, because it gives you more than you could have anticipated.

I've had several of these experiences. One of the very valuable ones came from a session at some agile conference I can't even pinpoint well enough to correctly attribute: the use of the whiteboard. I believe it is one of those things I kept seeing over and over again without anyone particularly emphasizing it, and at some point it became the thing I needed to try.

The transformation of how I am in meetings has been quite significant. From the person on the computer making detailed notes (and often ending up as the meeting secretary), I've moved to being the person at the whiteboard, making conceptual notes. And where my written detailed notes never got feedback on whether we understood things the same way, my notes on the whiteboard invite corrections, additions and depth. The visual nature of the boxes and arrows helps us discuss things, remember things and go deeper into them.

My phone is full of pictures that remind me of discussions in the past, and they work better than any of the detailed notes I used to have.

The big part for me was daring to change myself and try out something I wasn't sure I knew how to do. In doing it, I discovered that the best way to learn to do it is doing it.

And every meeting, I step up to the whiteboard. It helps me keep track of where we are in discussions. It makes people changing topics in the middle less confusing, as the image on the board must change. And it makes sure I'm not alone with my confusion.

Sunday, January 15, 2017

The Overwhelmingly Helpful Comments

I'm going through a bit of an emotional flashback over a discussion I saw on twitter and tried to dismiss. I did not succeed too well with the dismissing, so I'm blogging to offload.

One of the women (49% of the 454 people I follow on twitter) posted her Selenium code sample from a short training she had just delivered. And two of the men (the other 51%) posted comments criticizing her code.

I truly believe these comments of critique were made in good faith, and with the intention to help improve. Both comments introduced concepts that were missing from the code ('page object factory', 'Java 8 features like lambdas'), and you could even assume the authors knew there could be a reason things were excluded even if they could have been there too.

What brought me to blogging is that it took me back to the time when I started coding, decades ago, and when I stopped coding precisely because of these kinds of lovely, helpful people who were suffocating me.

Talking in metaphors

I love dancing, but I'm not a dancer. When I go to dancing lessons, at the basic level my teachers usually correct only the most relevant things, and a lot of the time they don't correct anything. They let me be surrounded with the joy of dancing, encouraging continued practice without critique.

They could also do things differently. They could start right away telling me to pay attention to my dancing position. They could point out continuously everything I could do differently and better. I could hear about my facial expressions (smile!), the positioning of every part of my body, and the fact that, you know, there are all these subtle differences in rhythm I'm not yet ready to pay attention to.

The joy comes first. And the other stuff comes layered. And sometimes, the feedback just sucks the joy out of dancing because that's all I want to do.

Being a Woman who Codes

When you are in the minority, the positive and helpful people around you all tend to want to pitch in with feedback. The style of the comments may be very constructive, but the amount of it can be overwhelming. The amounts are overwhelming just for you, because people just don't care as much about the others. The others are lucky if they get helped, but you get helped by everyone.

Everyone looks at what you do in a little more detail. Everyone wants to help you succeed with their feedback. And then there's someone, usually a very small minority, who uses all the feedback you're getting that others don't as evidence that 'women are not meant for coding'.

In projects with crappy code from everyone else, I always felt the feedback was asking me to be more perfect. Good intentions turned into sucking the joy out of the whole thing. I dropped coding for 20 years. And even now that I've come back, I'm still overly sensitive to being helped in overwhelming amounts.

The environment matters

Recently, I've experienced projects where pull requests get criticized and improved in detail regardless of gender, and those are places I find safe again. It's not special treatment, it's feedback for everyone. And it comes from a place of putting every line into production pretty much as soon as it gets committed to the main branch.

Back to the twitter incident

So with the twitter incident of commenting on this particular piece of code, I would ask:

  • Are these same people giving the same attention to every other public speaker's code?
Selective helping is one of the things I've experienced that drove me away from coding. I can't speak for anyone else, but I surely know that at a younger age, it made a difference to me. I would not be back without (strong-style) pairing and mobbing. 

Saturday, January 14, 2017

Thinking in Scopes

The system I'm testing these days is very much a multi-team effort, and as an exploratory tester looking particularly into how well our Windows client works, I often find myself in between all of these teams. I don't really care if my component works as designed, if the other components are out of sync, failing to provide end users the value that was expected. 

Working in this, I've started to realize that my stance is a rather rare one. It would appear that most people look very much at the components they are creating, the features they assign to those components, and the upstream or downstream dependencies they recognize. But exploring is all about discovering things I don't necessarily recognize, so a confirmatory, feature-focused approach won't really work for me. 

To cope with a big multi-team system, I place my main focus on the two end points that users see. There is a web GUI for management purposes, and there's a local Windows client. And a lot of things in between, depending on what functionality I have in mind. As an exploratory tester, while I care most about the end-to-end experience, I also care about ways to make things fail faster with all the components on the way, and I have control over all the pieces in between. 

I find that decomposing things into pieces while caring for the whole chain may not be as common as I'd like it to be amongst my peers. In particular, amongst my peers who have chosen to focus on test automation from a manual system tester background.

Like me, they care about end-to-end, but whatever they do, they want to do by means of automation. They build hugely complicated scripts to do very basic things on the client, and are inclined to build hugely complicated scripts to do very basic things on the web UI - a true end-to-end, automated. 

There's this almost funny thing about automation: while I'm happy to find problems exploring and then pinpoint them to the right piece, I feel automation fails if it can't do a better job at pinpointing where the problem is in the first place. It's not just a replacement for what could be done manually while testing, it's also a replacement for the work to do after it fails. Granularity matters. 

For automation purposes, decomposing the system into smaller chains responsible for particular functionality gets more important. 

I drew a picture of my puzzle.

Number 6 is true end-to-end: doing something on the Windows client 'like a user', and verifying things on the web GUI 'like a user'. Right now I'm thinking we should have no automated tests in this scope.

Number 1 is almost end-to-end, because the web GUI is very thin: doing something on the Windows client and verifying on the same REST services that serve the GUI. This is my team's favored system automation perspective, to the extent that I'm still struggling to introduce any other scopes. When these fail (and that is often), we talk about figuring things out in the scope of about 10 teams. 

Number 2 is the backend system ownership team's favored testing scope: simulating the Windows client by pushing simulated messages in through one REST API and seeing them come out transformed from another REST API. It gives a wide variety of control through simulating all the weird things the client might be sending. 

Number 5 is something the backend system ownership team has had in the past. It takes the REST API as a point of entry, simulating the Windows client, but verifies the end user perspective on the web GUI. We're actively lowering the number of these tests, as experimenting with them shows they tend to find the same problems as REST-to-REST but be significantly slower and more brittle. 

I'm trying hard right now to introduce scopes 3 and 4. Scope 3 would include tests that verify whatever the Windows client is generating against whatever the backend system ownership team is expecting as per their simulated data. Scope 4 would be system testing just on the Windows side. 
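Scope 3 is essentially a contract check between the client and the backend. A hedged sketch of the idea, with made-up field names: assert that what the client generates has exactly the shape the backend team's simulated data assumes:

```python
def matches_contract(message, shape):
    # The message must have exactly the agreed fields, each of the agreed type.
    return set(message) == set(shape) and all(
        isinstance(message[field], field_type) for field, field_type in shape.items()
    )

# Hypothetical example of what the Windows client generates:
client_message = {"device_id": "abc-123", "events": [{"type": "login"}]}

# Hypothetical expectation, matching the backend team's simulated data:
expected_shape = {"device_id": str, "events": list}

assert matches_contract(client_message, expected_shape)
```

When a check like this fails, it pinpoints the client-backend boundary without dragging the whole 10-team system into the debugging session.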

The scopes were always there. They are relevant when exploring. They are just as relevant (if not more relevant) when automating. 

The preference for the whole-system scope puzzles me. I think it is learned in the years as a "manual system tester" later turned "system test automation specialist". Decomposing requires deeper understanding of what gets built and how. But it creates a lot better automation. 

Telling me there are unit tests, integration tests and system tests just isn't helpful. We need the scopes. Thinking in scopes is important. 

Friday, January 13, 2017

Overnight Changes

There is a discussion that I keep going back to, begging to be unloaded off my mind. This morning I said the words: "It's like I joined a different project this week". There's been a sudden change in the atmosphere and in the things we do, and thinking back over the last week makes me realize some changes can be really fast.

In a week, my team transformed from a team working on its own components to a team that works with other teams on shared goals. We transformed from a team that seeks product owner acceptance and prioritization into a team that checks priorities but works actively to identify the next steps with one another. And we changed from a team that was quiet and not sharing, into a team that talks and plans together.

I can see three changes in the short timeframe:

  1. We did our first end-to-end demo across two teams and it resulted in a lot of positive reinforcement of customer value over team output. 
  2. Our product owner moved out of the team room and took a step back leaving more room for the team to decide on things. 
  3. A new-old developer joined the team. 
We weren't bad before, but this week has been amazing. We've accomplished a lot. We've learned a lot. 

Experiences this week remind me again of how changes in the environment change the system. And I'm delighted to be in a place that is willing to play with the environment to find even better ways to work together.

The Expensive Fear of Forgetting

I sat through two meetings today that leave me thinking about product backlogs.

In the first one, we took a theme ('epic') and as a group brainstormed with post-it notes to describe what would be needed, what would be needed first and what would be needed in general. The discussions provided a lot of shared understanding and clarity, and helped us identify a shared idea of how we are trying to prioritize things for value and risk. At the end there was a pile of post-its we had had the discussion around. I felt the meeting had been really good until someone said: "Now, let's take all these post-its and put them in Jira". I shrugged the unease off and let my mind relax, and then realized something about our priorities, building on the principles we had just come to understand, that again changed the overall plan. At this point the unease turned into frustration. If someone had already taken the "plan" to Jira, someone would now need to go and change it. Couldn't the shared understanding and the next step to work on be enough, instead of the whole plan?

The second meeting was one clarifying a feature ('story') we had just pulled up as a thing to work on. The meeting's focus was on identifying acceptance criteria, and again the discussions around the item helped us create a shared understanding, identify work to do between various parties and introduce the people working on this to one another. The moment of unease happened again at the end as someone said: "now we need to go add all the individual tasks to Jira and put the estimates in place". My team does not do estimates; we work with post-it notes on the wall and are doing pretty well with our Jira avoidance, taking discussions away from the writing and into richer media.

Instead of improving the backlog practices, I work with my team to improve our collaboration and discovery, shared understanding of priorities and ability to release. Instead of asking "how long will it take", I work with them to figure out if there was a way we could deliver something smaller of value, first. And it is clear: in doing the work, we discover the work that needs doing. We need to focus on doing more of the next valuable thing, over creating a longer term view or details of promises in electronic format.

Sometimes, we are so afraid of forgetting that we are ready both to invest in maintaining our lists (what a waste, in my experience) and to shape our work so that there's less maintenance, with less learning. Discovery is critical, and we pay a high, hidden price when we create ways of working that don't fully encourage it.

Yes is the right answer when someone asks for help

Working in agile projects, we tend to write a little less documentation. And working in a big project, whatever documentation we write tends to be dispersed.

Four months into the new job, I'm still learning my way around doing things and figuring things out. I'm happy for my little tools for finding the dozens of code repos that build up the product I'm testing, but there's a lot going on that I have simply chosen not to pay attention to. Quite often there's this feeling of being overwhelmed with all the new information, as we by no means stopped making changes when I joined.

In the past, I remember solving issues of documentation with two main ideas:
  • Draw on request. Whenever someone would want to understand our current system, anyone in the team could go on a whiteboard, draw and explain. 
  • Write on repeated requests. When the same info is asked for and does not completely change as we are learning, write instructions on the wiki.
They are still relatively good approaches, except...

Yesterday, I was overwhelmed with many different directions of work and there was one particular thing I needed to learn to do: get started on testing against a REST API.

Some weeks back I had taken my first go at it, and postponed the work because I was missing information about some needed credentials. So this time I decided to approach it differently. I went and talked to a colleague, asking if he would join me to get one POST working on my machine. But no.

I got an (outdated) wiki page describing content rules, but lacking the credentials I was unaware of.

I got a (not working) exported Postman script.

I've been thinking about this ever since. When someone comes to talk to you and asks for help, as in doing something together that you know well, the right answer would be "yes", or "yes, in two hours". Not "here's the document".

I eventually got it working with the documents. But I'm now realizing that the feeling of being left alone is overwhelmingly more important than the fact that there were pieces of documentation that were eventually pointed out.
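For concreteness, getting "one POST working" usually boils down to having four things right at once: the URL, the credentials, the content type and the body. The sketch below assembles such a request without sending it; the endpoint, token and payload are all invented for illustration, and the credentials were exactly the piece the wiki page was missing.

```python
# A minimal sketch of what "one POST working" against a REST API needs.
# Everything here (endpoint, token, payload) is invented for illustration.
import json
import urllib.request

def build_post(url, token, payload):
    """Assemble an authenticated JSON POST request (built, not sent)."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # the missing credentials
        },
    )

req = build_post("https://api.example.com/v1/items", "SECRET-TOKEN", {"name": "test"})
print(req.get_method(), req.full_url)   # POST https://api.example.com/v1/items
print(req.get_header("Authorization"))  # Bearer SECRET-TOKEN
```

Sending it is one more line (`urllib.request.urlopen(req)`), but pairing up for ten minutes to find the real values for these four slots would have been the actual help.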

I miss more of a human connection than "create a pull request and someone will review it". How about us working together, really *together* for a change?

I guess I did not know to miss this before I had experienced Mob Programming. But now the individualistic attitudes make me painfully aware how things could be better.

Saturday, January 7, 2017

Why setting out to automate tests is a bad idea

On Thursday at work, a colleague was giving a presentation I had invited him to do, on how they've been automating their tests. Organizing sharing sessions comes naturally to me, both from being curious and knowing where to find all the best stories, and from wanting to create an atmosphere of sharing and learning.

As his story starts, he tells us he needs to explain a few things first. He spends maybe 30 seconds explaining why finding a way to automate was so needed (malware evolves fast, and when you're responding to something like that, you need to evolve fast too). But then he spends 20 minutes talking about things most people in the room, identifying as quality engineers, have never done. He speaks of recognizing problems with being able to test, and finding the best possible programmatic solution.

He talked about how they introduced blue-red deployments within the product (without even knowing it was a thing outside Windows client software) and how that solved all sorts of problems with files being locked. He shared how they changed the technical designs, bit by bit, so that the whole installation is rebootless, because it was just hard to automate stuff that would need to continue after a reboot. Example by example, his story emerges: to automate testing, they needed to fix testability. Just adding tests around big problems, when you could instead change the product to remove them, makes little sense.
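The locked-files trick can be shown in a toy sketch. This is my reconstruction of the general idea, not their code: keep two install slots and switch an "active" pointer, so the new version never has to overwrite files the running version holds locked.

```python
# Toy sketch of the blue-red deployment idea (my reconstruction, not the
# product's actual code): install into the inactive slot, then switch.
SLOTS = {"blue": None, "red": None}
active = "blue"

def install(version):
    """Install the new version into the inactive slot, then switch atomically."""
    global active
    inactive = "red" if active == "blue" else "blue"
    SLOTS[inactive] = version   # safe: no running process uses this slot
    active = inactive           # the only step the running system observes

install("1.0")
install("1.1")
print(active, SLOTS[active])   # the newest version is now the live slot
```

The design choice that matters for testability is that every step except the pointer switch happens on files nothing is using, so automation never has to fight locks or wait for a reboot.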

The story makes it clear: to be effective in this style of testing, you should be able to program outside of the tests you're programming, and if you can't, team up with someone who can. Without the view of solving problems programmatically where they make the most sense (design vs. tests), you would be on a path to difficulties.

For a room full of test automators who barely look into the application code, his message may have been intimidating. Setting out to automate tests (as in "this is what I want to test, designs don't change") is often an invitation to trouble.

Make it first simple to test, then make a simple test to test it. The first is much harder. And I find that most repurposed manual testers who become test automators without caring for product structures that make "manual" testing easier hit this trap harder than exploratory testers who have been working with the friends with pickup trucks (programmers) all along.

Monday, January 2, 2017

Normalizing Learning

I remember some years ago when I heard about a new thing that was going on and getting some buzz around the tester universe: Weekend Testing.

The idea is simple and beautiful. Volunteers would dedicate some time to facilitate practice sessions on testing over Skype, and anyone could join. The sessions, as the name says, would take place on weekends, off time from work. The sessions would be a place to see how other testers approach a particular problem. And if you missed a session, a transcript of the written discussion would be published for you to read.

I absolutely hated the idea. Not because of the idea of practicing over Skype, but because of the built-in cultural message that said to me:
Testers are not important, if they want to learn they need to do so on their own time. Learning is not part of work hours.
I was so against the notion that I did not join any of the Weekend Testing sessions (until I ended up facilitating for Weekend Testing Europe a little over a year ago). Instead, I would put energy into organizing half of my meetups during office hours, learning that in Finland companies do let people join in the middle of the day, and in particular in the mornings.

I remembered this because I listened to Ajay's CAST keynote and learned how he would work (and travel for work) from 8 am to 7 pm, and then work on learning from 7 pm to 1 am. And how, after 17 years (!!) of hard work, he was finally delighted to do his first international keynote, something he had aspired to since giving a local talk in 9th grade.

My hours probably look only a little better, but the underlying cause I work for is to find means to normalize learning. When I am at work, every day I can take an hour to do things *differently* than usual, and that teaches me a lot. I can stop to reflect instead of just steaming through an assignment. I can read or listen to a talk. I can volunteer for tasks I'm not assigned to, even tasks where people say they are "not part of my job description". And I can find a meetup where I can hear how bad things are elsewhere, so that I remember to appreciate what amazing places to work I have managed to end up in.

Learning is the key. But instead of externalizing learning to one's own time, it needs to be normal to learn while working. Even when we are ambitious and find it hard to invest just the regular hours for our 'work' - including the learning.

Sunday, January 1, 2017

2016 - what a year

I'm a big fan of reflection cadence, and it's time for one of those moments dedicated to acknowledging, primarily to myself, what I've made out of yet another year.

Changing the World of Conferences

In February 2016 I introduced this tagline while opening the inaugural European Testing Conference: ETC is a platform for change.

We're changing from testers to testing, bringing together testers and programmers to talk about the ways they do testing, with a practical, cross-discipline perspective.

We're changing from competing for a speaking slot based on a written description and reputation to an oral explanation of your idea and experiences.

We're changing from thinking free entry is enough compensation for speaking to paying speakers' expenses and profit-sharing with the speakers.

We're changing from thinking of our conference to thinking of conferences at large.

  • We paid out 30 speakers' expenses in full, and allocated 'shares' of the profits. A 30-minute talk was one share, and in the 2016 edition a share paid out 160 euros.
  • Two conferences that previously did not pay speaker expenses now do - and I like to think the awareness of this topic may have played a small part in Nordic Testing Days and Copenhagen Context changing their policies to no longer make people pay to speak.
  • We paid a travel scholarship for Mirjana Kolarov to deliver a well-received talk as a Speak Easy speaker at EuroSTAR 2016.
  • We selected all speakers for European Testing Conference 2017 based on Skype calls with the speakers, building the most diverse set of lessons we could. I feel honored to have had a chance to talk to so many amazing aspiring speakers in the process. 
  • I mentored about 10 people to start or improve their conference speaking careers, some through SpeakEasy, some through direct contacts.
  • I'm closer than ever with some developers and delighted at the number of mentions of how they see the difference in 'the testing I do vs. the testing they do', and I feel I have won over some developers to appreciate that exploratory testing is something different.
Change of Focus in Full-Time Employment

In September 2016, I changed jobs. From a full-time tester in a small team, I changed into a full-time tester in a small team amongst many teams, introducing a new scale to the challenges I'm working with.

Simultaneously, my speaking engagements changed from private time (except for one week a year) to work time. That is a big change: with the amount of speaking I've been doing, I've worked very long days. This serves as a reminder that not all companies see speaking as part of your work.

Now that my company allows me to speak as part of my work, I also have a job that requires more of my attention, naturally creating the desire not to leave the office for speaking engagements. With the company slogan being 'Seeing things others don't', I feel like a great match with my skills in exploratory testing in the realm of security software.

I tried cutting down my speaking in 2016 by not submitting to conferences. I ended up failing at my goal of cutting down though, as my long-term aspiration of being invited (to keynote in particular) started to materialize.

  • I helped select my successor (by sampling candidates in a videotaped pair testing session) and, after I had started my new job, spent a day training the new employee plus a developer through mob testing.
  • I found a new job that challenges me again continuously, forcing me to learn new approaches and skills, and supports my need for self-organization. I passed on another job I almost took and learned that 'losing one opportunity only opens another opportunity'.
  • With a larger number of colleagues in the new company, I organized an Hour of Code for employees' kids ages 7-12 and had 30 kids join.
  • I became someone who automates and takes test automation forward (without turning off the exploratory testing mindset).

I spoke a lot at conferences and meetups. For a year of trying to cut down on my speaking, I failed. The numbers add up quickly, as each talk I do internationally tends to get a local practice round.

In 2016, my stretch goal was to propose the same topic to many conferences. In past years, I've considered it my signature that I always speak on different topics, to surprise the few people traveling alongside me. I did not quite get down to having a real signature talk (still searching whether there's a topic like that for me), but I narrowed down the number of topics and practiced (to a personal stretch) delivering the same talk multiple times to see if talks grow that way.

  • Co-taught with Maaike Brinkhoff, gaining new hope for a group of explorers refining our craft in close collaboration with developers in agile teams.
  • Delivered 29 talks, of which 4 were keynotes and 2 were webinars.
  • Talked on 15 different topics, getting to 6 repetitions with one topic and 5 with another. With 2015 having 22 different topics over 33 talks, my attempt to "cut down" realized itself in a different form than I might have originally imagined.
  • Spoke at 2 non-testing conferences on non-testing topics: Devoxx in the UK on learning programming through osmosis and Agile in the USA on pair programming.
  • Did 5 podcasts that came out in 2016 and one that comes out in 2017, split into 5 episodes.

I made very little progress on my pair-writing project, the Mob Programming Guidebook, as finding shared working time proved to be even more challenging when every sentence is written as strong-style pairing.

I wanted to write another book, one on exploratory testing, encouraged by my great experience with LeanPub. Huib Schoots was kind enough to pass the book name on LeanPub to me (as he had reserved it), and now writing the book is my big 2017 goal. That, or passing the name forward again.

I wrote some more articles that could become part of my upcoming book. I'm very happy with my two articles with Ministry of Testing (who pay for articles, except I never invoiced for these).

And I blogged whenever I felt I'd want to remember later what I was thinking.

  • The Mob Programming Guidebook got up to 454 readers, with 133 having voluntarily paid for the book.
  • Published four articles and wrote a fifth in September that will come out in January 2017. 
  • Blogged without thinking about it, ending up with 201 blog posts published in 2016. Also, getting two mentions (of honor!) on the AB Testing podcast by Alan Page was a definite highlight.
  • My blog hit 361,622 page views by the end of 2016 and my Twitter follower base hit 2,964 followers.

I met too many people to remember to appropriately appreciate them all. But I wanted to highlight a few that made a difference for me.

  • Co-teaching with Maaike Brinkhoff was absolutely wonderful. Getting to hear her speak at Agile Testing Days 2016 just added to my admiration of her. She gave me a relevant reminder of how it is possible to mutually look up to each other, and how we feel more distant before we find ways of collaborating.
  • Adding powerful women of color to my network of people I recognize (and consider friends) shouldn't be worth a mention, but it is to me. I thank my conference trips to the US for showing me how closed my circles have been, and I'm delighted to know Ash Coleman and Angie Jones.
  • The year gave me more chances than before to connect with Richard Bradshaw, and I feel I have a lot in common with him. Through him and Rosie, I feel connected with Ministry of Testing, and I often find myself thinking of ways to pitch in to make that community more awesome.
  • Anna Royzman organized an awesome conference, had the courage to invite me as her keynote speaker, and went through quite a mess with the things that happened. Anna was also with me in a Women Speakers' Mastermind group facilitated by Deb Hartmann, a group that gave me a lot of food for thought on what my speaking goals are and how speakers in general find their signature talks and differentiate themselves from others with similar experiences.
  • Many people I 'know from Twitter' became real people - too many to list. Thank you all for coming to talk to me at conferences while I'm battling my social anxiety about connecting with strangers. You may not even realize how much it means to me that you take steps to introduce yourself.

I'm still a serial organizer, helping run non-profits and programs. 

  • I helped organize the first-ever Agile Coaching Camp Finland within Agile Finland and learned valuable lessons about taking too much onto my plate.
  • I co-organized European Testing Conference 2016 and got a good start on the 2017 edition, as the conference is in early February.
  • I started seeing women's faces at the Tech Excellence Finland meetup that I organize. I set up 5 meetup sessions and admired how fluently Llewellyn Falco organized 4.
  • Got re-elected to the Agile Finland ry Executive Committee for the autumn 2016-2017 period, with a commitment to take forward software team-level agile practices.
  • Organized 2 webinars under the flag of TestGems with Ministry of Testing, for new voices and stories.
Being awarded

  • The highlight of my busy year was the peer recognition I received at Agile Testing Days 2016, being selected 'Most Influential Agile Testing Professional Person 2016'.