Saturday, December 30, 2017

Finding a Bug I wasn't looking for

Some years ago, I was working as a test manager on a project where I was considered "too valuable to test". So I tested in secret, to be able to guide the people that were not as "valuable" as I was, to make them truly valuable.

We were an acceptance testing group on the customer side, and the thing we were building would take years and years before all the end-to-end layers existed to use it like an end user would. A lot of the testing got postponed as there was no GUI - leaving us thin on resourcing early on. There were multiple contractors building the different layers, and a lot of self-inflicted complexity.

The core of the system was a calculation engine, a set of web services sitting somewhere. With little effort available and weird constraints on my time, I still managed to set things up to test it while it had no UI.

We used SoapUI to send and receive messages. The freebie version back then did not have the nice "fill in the blanks" approach the pro one had, and it scared the hell out of some of my then colleagues. So we practiced in layers, putting values in Excel sheets and then filling the values back into the messages. As my group learned to recognize that amongst all the extra technical cruft were values they cared deeply for and concepts they could understand better than any of the developers, we moved to working with the raw messages.

In particular, I remember one of my early days of trying to figure out the system - documented in thousands of pages of specification - by using it. I could figure out that there were three compulsory fields that needed filling; the other stuff was all sorts of overrides. So I took a listing of some thousands of combinations of those three things, parametrized the request to send thousands of messages, and saved the responses on my hard drive.
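Back then this was clicked together in SoapUI, but the same kind of parametrization can be sketched in a few lines of Python. Everything below - the endpoint, the field names and the message template - is a made-up placeholder rather than the actual system:

```python
# A minimal sketch of the parametrization described above, with a hypothetical
# endpoint, field names and SOAP template - not the real system.
import csv
import requests

TEMPLATE = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <CalculationRequest>
      <FieldA>{a}</FieldA>
      <FieldB>{b}</FieldB>
      <FieldC>{c}</FieldC>
    </CalculationRequest>
  </soapenv:Body>
</soapenv:Envelope>"""

with open("inputs.csv", newline="") as listing:            # one row per message: a,b,c
    for i, (a, b, c) in enumerate(csv.reader(listing)):
        message = TEMPLATE.format(a=a, b=b, c=c)
        response = requests.post("https://example.test/calculation",
                                 data=message,
                                 headers={"Content-Type": "text/xml"})
        with open(f"response_{i:05d}.xml", "w") as out:    # save for later analysis
            out.write(response.text)
```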

I did not really know what I would be looking for, but I was curious about the output. I opened one in Notepad++, and skimmed through thousands of lines of response. There was no way I would know if this was right or wrong. I got caught up seeing error codes, and made little post-it notes categorizing what I was seeing. I repeated this with another message, and felt desperate. So on a whim, I opened all the messages I had and started searching the codes from my notes across all the messages.

The first code I searched for was something that, as far as I conceptually understood, shouldn't be that common. Yet 90% of the messages I had included that code. I checked with a business expert, and indeed my layman understanding was correct - the system was broken in a significant way if this code was this common. It meant lots of manual work for a system that was intended to automate decisions in the millions.
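Notepad++'s Find in Files does this job fine; as a sketch, the same cross-message scan could look something like this, assuming the responses were saved as in the earlier snippet and using a made-up code in place of the real ones:

```python
# A sketch of the cross-message search, assuming responses saved as
# response_*.xml files and a hypothetical error code from the post-it notes.
from pathlib import Path

responses = list(Path(".").glob("response_*.xml"))
code = "CODE-1234"   # made-up code, stands in for the real one

hits = sum(1 for path in responses if code in path.read_text())
print(f"{code}: {hits}/{len(responses)} messages "
      f"({100 * hits / len(responses):.0f}%)")
```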

By playing around to understand when told not to, I found a bug I wasn't looking for. But one that was a no-go in a system like that.

My greatest regret is that I spent time in the management layers, fighting on their terms. With the skills I have as a tester, I would have won the fight for my organization if I had just tested. Even when told not to. I was too valuable not to test.

This experience made sure I would again find places to work that did not consider the most expensive tester someone who wasn't allowed to test. And I've been in the right kind of organizations, making a difference, ever since.

Sunday, December 17, 2017

Kaizen on Test Strategies

I just saw a colleague changing jobs and starting to talk about test strategies. As I followed their writings, my own experiences started to surface. I realized I am no longer working on visible test strategies - the last one I created was when starting my second-to-last job, and it did not prove that valuable.

When I say test strategy, I mean the ideas that guide our testing. Making those visible. Assessing risk and choosing our test approaches appropriately.

In the past, making a strategy was a distinguishable effort. It usually resulted in either a document or a set of slides. It guided not only my work, but supposedly the whole project. It was the guideline that helped everyone make choices towards the same goals.

Thinking of the strategy and specifics of a particular project was a distinguishable effort while I was still doing projects. With agile and continuous delivery, there is no project, just a flow of value in a frame of improving excellence. When I joined new organizations that had no projects, being introduced as someone coming to "improve / lead the testing efforts" triggered the strategy considerations. So what is different with my most recent effort, other than the lazy explanation of me not being diligent enough?

I approach my current efforts with the idea that they have been successful before me, and they will remain successful with me. I no longer need to start with the assumption that everything is wrong and needs to be set right. Even if it was wrong, I assume people can't change fast without pain, so I approach it with a Kaizen attitude - small continuous improvement over time, nudging a little here and there and looking at where we are and where I would like us to find our way.

Nowadays, a selection of visions of what good testing looks like resides in my head. I talk about that, with titles like "modern testing", "modern agile" and "examples of what awesome looks like". I don't talk about it to align others to it; I talk to give people visibility into my inner world, and to learn what they are ready to accept and what they are not.

All the work with testing strategy looks very tactical. Asking people to focus here or there. Having a mob testing session to reveal the types of information we miss now in general. Showing skills, tools. Driving forward respect for the exploratory tester, but also the patient building of a test automation system that does better, as per my understanding of what better is.

Looking back, I remember (and can show you) many of the test strategy documents I've created. None of them has been as effective as the way I have led testing, with Kaizen in mind, for the last five years. 

Saturday, December 16, 2017

But women did not submit

Sometimes some women have the energy to go and mention on Twitter when they see conferences with all-male keynotes or an all-male lineup. Most of the time, we notice but choose not to take on the attacks that result from pointing it out.

I'm feeling selectively energetic, and thus I'm not addressing directly the particular conference that triggered me into writing, but the underlying issue of how conferences tend to respond.

The most common response is: they tried, but women did not submit.

I don't think they tried enough.
I believe we should, in conferences, model the world as we want it. We should have half women, half men. And with lineups usually going up to 100 people, it is not hard to find 50 awesome women a year to speak on relevant topics. The audience would get an amazing experience and learning. The ones threatened by this proposal are the 40 men who in the current setup get to speak, and who with my proposal would have to queue for another event.

Instead of calling for proposals and choosing with equality in mind, perhaps we should be choosing based on equity. And if we did, perhaps we would not have to "fake it" long before we "make it".


The pool of awesome women speakers, in my experience, grows when potential women participants in conferences see people they can relate to and feel they can do it too. We've done a lot in this respect in the last few years, with SpeakEasy adding support after the initial spark, and it shows in some conferences. 

Here's a thought experiment I played through for the time when women are still not equally available. Let's assume I want to speak and I can invest 100 work units into speaking. I can invest this time in different ways:

Respond to CFPs trying to vary contents.
Each CFP process takes 10 units of work, and each talk takes 20 units of work. Varying the contents so that my talk would fit an invisible hole in the program is a lot of work: if my talk is on How the Use of Amazon Lambdas Changes Testing, there could be 10 others with the same fashionable topic. If my talk is on Security Testing, maybe this is too much of a niche to be given space at this conference. If my talk is on hands-on experiences in testing machine learning systems, maybe the keynote speaker already fills the slot for discussing machine learning. 

So let's assume I approach this as an equal player in the field, and I want to get to the conferences. I want my voice out there. It's embarrassing to have to say no when you get accepted, so I might play my chances so that I reserve time for two talks (2*20 units of work), which leaves me enough to submit to 6 conferences.

Wait to be invited
If someone else carries the load of 10 work units and finds you, invites you and negotiates exactly the topic that would fit the program, you save a lot of work. The 100 units allow for 5 talks instead of 2, making this person more available at conferences. This is the way to create equity while we need it. 
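To make the arithmetic of the two paths explicit, here is the same 100-unit budget worked through as a small sketch, using the unit costs assumed above:

```python
# A back-of-the-envelope check of the thought experiment above,
# using the work-unit costs assumed in the text.
budget = 100      # work units available for speaking per year
cfp_cost = 10     # units spent on one CFP submission
talk_cost = 20    # units spent preparing and delivering one talk

# Path 1: respond to CFPs - reserve time for two accepted talks,
# spend the rest of the budget on submissions.
talks_planned = 2
submissions = (budget - talks_planned * talk_cost) // cfp_cost

# Path 2: wait to be invited - the organizer carries the CFP-equivalent cost.
invited_talks = budget // talk_cost

print(f"Submitting: {submissions} CFPs, {talks_planned} talks given")   # 6 CFPs, 2 talks
print(f"Invited:    {invited_talks} talks given")                       # 5 talks
```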

So, when conferences say that women did not submit, they're actually saying:
  • the women who submitted did not choose to bet their time on us but went elsewhere
  • we did not do enough to get women to be considered for the program
  • we believe in treating everyone the same (equality), regardless of it being an approach that enforces the status quo
Having a good proportion of women is good business too. The contents are more representative, and speak to a wider audience. 

And, on towards intersectionality. It's easy to count this on binary gender, but that is not the only diversity we look for. We want to see diversity of ethnicity, the whole spectrum of gender, and whatever minorities we are not getting the chances to learn from. 

Long Delay in Feedback

A year ago, we created a new feature that I tested diligently, loving every moment of it. Yesterday, almost a year later, we received feedback that the "works as designed" isn't quite good enough for the purposes of a type of customer. I looked at the email, frustrated as it outlined concerns I had raised a year ago without reaction. When the response email from a developer mentioned "we had not thought of this scenario", I bit my lip not to correct. Correcting isn't helpful.

I would love to speak in specifics, but I can't. No one is telling me what to write and what not to, but I sense things myself. When creating features around security, the less people know of the inherent designs, the safer our clients are. But I can speak about concepts. And the concept of what I tested, and what information got dismissed when delivered by me but accepted when it came from the customers, is relevant. 

We work in an agile/devops fashion. The feature got built increment by increment, and went through numerous releases before it was officially out there. Each increment, we would talk in the team about the thing we would be adding. It was always natural and fluent to test everything that got mentioned as functionality to add. It was also evident to test error cases with the functionalities we added. Equally, it was evident to test those functionalities for security, performance, reliability and ease of use. The feature was built on a Windows service, and testing the integrated system was evident. 

What was not evident was the testing of other similar features integrating with the same Windows service. Well, it was something I did. It was something I reported on. It was something where we agreed that we'd change things if customers felt the way I did. 

For well over six months in heavy use, we did not hear back. Until now. 

I could take this as "it's just time for adding that functionality", in incremental fashion. It's not like anyone was relaxing and slacking off in the meanwhile. Or, I could take it as yet another consideration of what goes wrong with providing and accepting feedback. 

When we build incrementally, I find we are often not concerned with things beyond the immediate agreed scope. It takes quite a skill in product ownership to see the connections and the whole, when development teams tend to focus on the slice they're pointed at. 

The long delay in feedback brings things back as surprises to current plans. There was the feedback with a short delay that got dismissed. But all in all, reacting to this feedback right now, with the short cycle to delivery we've built, makes the customer happy regardless. In a world where they are still used to waiting 6 months or more for a resolution, delivering fast makes them feel happier than having it right the first time. 

Thursday, December 7, 2017

Becoming a Feedback Fairy

Late in the evening of a speakers' dinner at CraftConf 2017, I met a new person. He was a speaker, just like me, except that when he asked what I would speak on, he used the words: "Explain it to me like I am not in this field, and I don't understand all the lingo".

I remember not having the words. But this little encounter with a man I can't even name made it into my talk the next day, when I for the first time introduced myself in a new way:

"I'm Maaret and I'm a feedback fairy. It means that I use my magical powers (testing skills) to bring around the gift of deep and thoughtful feedback on the applications we build and the ways we build them. I do this on time for us to react, and with a smile on my face."

That little encounter coined something I had already been coming to from other ends. There were two other, prior events that also had their impact.

At the DevOxx conference some time ago, I did a talk about Learning Programming. Someone in the audience gave me feedback, explaining that they liked my talk. The positive feedback as it was phrased made an impact: they expressed that they'd ask me to be their godmother, unless that place was already up for grabs for J.K. Rowling. As a dedicated Harry Potter fan, being second only to J.K. Rowling for anything is probably the nicest thing anyone can say.

As I received this feedback, I shared it with the women in testing group, and a new friend in the group picked it up. As I was doing my first ever international keynote at an open conference, she brought me a gift you can nowadays see in my Twitter background: a fairy godmother doll, to remind me of my true powers.

For the Ministry of Testing Masterclass this week, I again introduced myself as a feedback fairy.
You can be a feedback fairy too, or whatever helps you communicate what you do. There's an upside to being a magical creature: I don't have to live by the rules set by the mortals. 

Friday, December 1, 2017

Sustainability and Causes of Conferences

Tonight is one of those nights where I think I've created a monster. I created the #PayToSpeak discussion, or better framed, brought a discussion that already existed outside our testing bubble inside it, and gave it a hashtag.

The reason why I think it is a monster is that most people pitching in to the conversation have very limited experience of the problem it is a part of.

My bias prior to experience

Before I started organizing European Testing Conference, I was a conference speaker and a local (free) conference organizer. I believed strongly that the speakers make the conference.

I discounted two things I had no experience of back then:

  1. Value of organizer work (in particular marketing) in bringing in the people
  2. Conference longevity / sustainability
Both of these things mean that the conference organizers need to make revenue to pay expenses while the conference itself is not ongoing. 

Choices in different conferences

My favorite pick on #PayToSpeak Conferences is EuroSTAR, so let's take a more detailed look at them.
  • A big commercial organization, paying salary of a full team throughout the year
  • Building a community for marketing purposes (and to benefit us all while at it) is core activity invested in
  • Pays honorarium + travel to keynote speakers
  • Pays nothing for a common speaker, but gives an entry ticket to the conference
  • Is able to accept speakers without considering where they are from as all common speakers cost the same
  • Significant money for a participant to get into the conference, lots of sponsors seeking contacts with the participants
I suspect, but don't really know, that they might still have revenue left from the conference after using some of the income on running the organization for a full year. I know their choice is not to invest in the common speaker, and I believe it lowers the quality of talks they are able to provide. 

Another example to pick on would be Tampere Goes Agile - an Agile conference in Finland I used to organize. 
  • A virtual project organization within a non-profit, set up for running each year
  • No activity outside the conference except planning & preparation of the conference
  • Pays travel to all speakers, can't pay special honorarium to keynote speakers
  • Runs on sponsors money and stops when no one sponsors
  • Is not able to get big established speaker names, as it doesn't pay speaking fees on top of travel
  • Requires almost zero marketing effort, straightforward to organize
  • Free to attend to all participants
Bottom line


#PayToSpeak is not about conferences trying to rip us speakers off when they ask us to cover our expenses. Conferences make different choices on ticket price (availability to participants, balanced with the amount of sponsor activity) and on investment / risk allocations.

Deciding to pay the speakers is a huge financial risk if paying attendees don't show up.
Paying speakers travel conditionally (if people show up) does not work out.
Big name keynote speakers expect typically 5-15k of guaranteed compensation in addition to their travel expenses being covered.

Conferences decide where they put their money: participants (low ticket prices), speakers (higher ticket prices with arguably better quality content), keynote speakers (who wouldn't show up without the money) or organizers (real work that deserves to be paid or will not continue long).

#PayToSpeak speaks from a speaker's perspective. We can make choices about which conferences we can afford based on the speaker-friendly choices they make.

Options

If we understand that there are two problems #PayToSpeak mixes up, we may find ideas of how to improve the current state:

  1. Commonly appearing (but not famous) speakers should not have to Pay to Speak to afford speaking.
  2. New voices with low financial possibilities should not have to Pay to Speak to afford speaking. 
If some conference does relevant work for 2, then as a representative of 1 I would consider paying to speak. But I would have to choose maybe one per year, because that is not out of my company's pocket, but my own. 

If some conference collects money for a cause in a transparent way, I again would consider paying to speak, capping the number I can do in a year. 

There are options for removing Pay to Speak:
  • Seek local speakers (build a local community that grows awesome speakers), and paying the expenses is not a blocker as the costs are small
  • Commit to paying speaker expenses, but actively invite companies they work for to pay if possible to support your cause. See what that does. 
  • Set one track to experiment with paying expenses and compare submission to that track to others, with e.g. attendee numbers and scores. 
  • Say you pay travel costs on request, and collect the info of who requests it with the call for proposals
  • Team up with some non-profit on this cause and give them money for scholarships for some speakers. 
You can probably think of some more. 

None of these conferences is inherently evil. Some of them are out of my reach because they are #PayToSpeak. And I'm not a consultant, nor do I work for a company that treats its testers as its marketing group. If I have to #PayToSpeak, I can't. I will remain local, and online. 

There are people like me, better than me, who have not yet started off by paying their dues to get a little bit of a name at some #PayToSpeak conference. I want to promote to them the options of not having to #PayToSpeak. 

Why defining a conference talk level means nothing

Some weeks back, unlike my usual commitment to follow my immediate energy, I made a blogging commitment:
The commitment almost slipped, only to be rescued today by Fiona Charles saying the exact same thing. So now I just get to follow my energy on saying what I feel needs to be said.

Background
 
As you submit a talk proposal, there are all these fields to fill in. One of the fields asks for the level of the talk. The level then appears later as color coding on the program, suggesting it is among the three most important pieces of information people use to select sessions. The other important bits are the speaker name (which only matters if the speaker is famous) and the talk title. On how to deal with talk titles, you might want to check out the advice in the European Testing Conference blog.

The beginner/intermediate/advanced talk split comes in many forms. Nordic Testing Days in particular caught my eye with the "like fish in the sea", "tipping your toes" metaphoric approach, but it is still the same concept.

The problem

To believe concepts like beginner/intermediate/advanced talk levels are useful, you need to believe that people can be compared on a topic like this.
This same belief system is what we often need to talk about when we talk about imposter syndrome - we think knowledge and skill are linearly comparable, when they actually aren't.

The solution

We need to think of knowledge and skills we teach differently - as a multi-dimensional field.

Every expert has more to learn and every novice has things to teach. When we understand that knowing things and applying things isn't linear, we get to appreciate that every single interaction can teach us things. It could encourage the "juniors" to be less apologetic. It could encourage the "intermediate" to feel like they are already sufficient at something, even if not at everything. And it could fix the "experts'" attitudes towards juniors, where interaction is approached with preaching and debate, over dialog with the idea of the expert learning from the junior just as much as the other way around.

So, the conference sessions....

I believe the best conference sessions even on advanced topics are intended for basic audiences. This is because expertise isn't shared. We don't have a shared foundation. Two experts are not the same.

It's not about targeting to beginner / advanced, it's about building a talk on a relevant topic so that it speaks to a multi-dimensional audience.

As someone with 23 years of industry experience, even my basic talks have some depth others don't. And my advanced talks are very basic, as I need to drag entire audiences to complex ideas like never writing a bug report again in their tester career.

We need more good talks that are digestible for varied audiences, and less random labeling of the level of the talk. In other words, all great talks are for beginners. We're all beginners from the other's perspective. 

Wednesday, November 29, 2017

Not excited about pull requests

There was a new feature that needed to be added. As the company culture goes, Alice volunteered for the job. She read the code around the changes to identify the resident local style in the module, knowing how much of an argument there can be over tabs and spaces. She carefully crafted her changes, created unit tests, built the application and saw her additions working well with the product. She had the habit of testing a little more than the company standard required; she cared a lot about what she was building. She even invited the team's resident tester to look at the feature together with her. 

All was set except the last step. The Pull Request. It had grown into a bit of a painful thing. Yes, they were always talking about making your change set small to make the review easier, and she felt this was one of the small ones, just as they were targeting. But as soon as her pull request was created, the feedback started.

If she was lucky, there were just the comments on someone's preferred formatting style - across all the codebases they were working on, there still was no commonly agreed style, and automatic formatting and linting were only available on some of the projects. But more often than not, every single pull request started a significant thread of discussion about other ways the same thing could be done. 

Sometimes she would argue for her solution. But most of the time, she would give in, and just change as suggested. Or commanded, she felt. It was either a fight or submission. And the one with the power, reviewing the pull requests, somehow always ended up with their way. 

After rewriting to the solution of the people leaving comments, Alice quickly ran the unit tests. But that was it. The version that ended up in production did not get the tester's eyes before merging, nor the careful testing in the product context that Alice had put into the first version she created. 

Another time, her change was just a fix for something that was evidently broken. The pull request rumba started again, with requirements of cleaning up everything that was wrong around the place that had been changed. This time she gave up again - accepting the rejection of the pull request; someone else would get to enjoy the next try at fixing. The "perfect or nothing" attitude won again. 

When Alice was free to review Bob's pull request, she too mimicked the company culture. "This is the way it works", she thought. She felt that if she said less than others, it would reflect badly on her. So she said as much as she could say. Shared other ways of implementing things. And just as Alice would change her code in response to comments, so would Bob. Telling the difference between "here's another way I thought of, for your information" and "I won't accept without you changing this" had become impossible. 

This story is fictional, and Alice (and Bob) just happened to be the people on this fictional project. But the dynamic is very real. It happens to developers with all levels of experience, when the culture around pull requests starts aiming for perfection instead of good enough. It happens with the culture of delayed feedback in pull requests, with a refusal to pair. There are many ways of implementing the same things, and sometimes arguing about my way / your way AFTER my way was implemented gets overwhelming. 

Here’s what I’d like to see more:
  • suggest changes only when that is needed, not just because you can
  • improve the culture created around “acceptance” power dynamic and remove some of the power reviewers hold as guardians of the codebase
  • when suggesting extensive changes, go to the person and volunteer to pair. 
  • volunteer to pair before the work has been done
Writing this reminds me how nice it was to mob on some of the code, when the whole pain and motivation drain related to pull requests was absent. 

Tuesday, November 28, 2017

Camps of testing

The testing world is divided; sometimes I feel it is divided to an extent that can never be resolved. Friendly co-existence seems difficult. Yet the effort to work together, to hear the others out, to have the crucial conversations about something all the divided camps in their own way hold dear, needs to happen.

I think these camps are not clearly formulated, but they feel very real when you end up in disagreement. So I wanted to write about the three that came my way just today.

Testing should be a task not a role -camp

There's a significant group of people that have hurt my feelings in agile conferences telling me I am no longer welcome. Testing should be a task not a role is usually their slogan. They seem to believe that professional testers and testing specialists should not be hired, but all programming and testing work should be intertwined into teams of generalists. Of course generalists are not all made of the same wood, so often these generalists can be programming testing specialists, but the part about programming is often the key.

I've seen this work great. I can see it might work great even in my team. But then I probably wouldn't be in my team. And my team couldn't say things like "things were bad for us before Maaret joined". Because the things I bring with me are not just the stuff around automated or exploratory testing of the product, I also tend to hold space for safety and learning. And I do work with code, but more like 20% of my time.

I hypothesize that if this was the dominant perspective, we would have even less women's voices in software development. And I would choose having diverse views through roles over homogenizing any day.

Testers vs. Developers -camp

This is my reframe of the group of people I would think of as testing traditionalists. They're building a profession of testers, but often with the idea of emphasizing the importance of the job/role/position by pointing out how developers fail at testing. They make jokes about test automation engineers being neither developers nor testers (bad at both). They often emphasize the traditional tester trainings and certifications. They don't mean to set us up as two camps, but much of the communication around us and them feels very protective.

I have seen this be commonplace. I have not seen it work great. Creating separate goals for testers (go find bugs!) and developers (get the solutions out there as specified!) isn't helping us finish on time and make awesome software.

Developers working with testers in this camp have a tendency of becoming religious about the Testing should be a task not a role -camp, if they care for quality. If they just work here and do what they are told, they will probably live with whatever structure their organization puts them into.

Testers and Developers -camp

I would like to believe there is a group of people like me, who don't identify with either of the camp archetypes defined above. They believe there can be professional testers (profession/role/job/position, whatever you call it), and some of them can be awesome with an automation focus, and some of them can be awesome with an exploratory testing focus. They might cross role-task boundaries frequently, in particular through pairing. The keyword is collaboration. Bringing the best of us, a group of diverse people with differing interests and differing skill areas, into the work we are doing by collaborating.

This group tends to shift left, until there is no more left or right as things turn continuous.

Where does this lead us? 

As with the stuff around schools of testing, this is putting people into boxes that are defined through trying to describe what is good about the way I think. I will continue to evangelize the idea of letting people like me - and people like me 5 years ago - and people like me 20 years ago - enter the field and learn to love it as I have learned to love it. I know I make a positive difference in my projects. I belong here. And I know others like me belong here.

I want to see us thinking of ways of bringing people in, not closing them out. I'm open to new ideas on how that could be possible for those who realize they want to be programmers only after they have become excellent through deep, continuous learning of things that are not programming, but that make us excellent exploratory testers. And it might take some decades of personal experiences. 

Playing with rotation time in a mob

A common thing to happen in a retrospective after Mob Testing is that someone points out that they feel they need more time as designated navigator / driver, "I need more time to finish my thought". Today, it happened again.

I facilitated the group on a 3-minute timer. I emphasized the idea that this is not about taking turns at executing our individual test ideas, but about the group working on a shared task - the same ideas - bringing the best of each of us into the work we're doing.

On two retrospectives with a 3-minute timer, the feedback was the same: make the time longer. So I did what I always do in these cases: I made the time shorter, and we moved to 2-minute rotation.

The dynamic changed. People finally accepted it wasn't about finishing their individual task, but to finish the group's shared task.

A lot of times when people feel they need more time, they are saying they have their own individual idea that they don't share with others. Longer time allows this. Shorter time forces the group to give up the illusion that this is useful. 

Monday, November 27, 2017

A Search for Clear Assignments

I spent a wonderful day Mob Testing with a bright group of people in Portugal today. They left me thinking deeply on two things:

  1. The importance of working in one's native language - the difference in dynamic between seeing them work in English vs. the local language was immense. 
  2. Need for clear plans
I wanted to talk a bit about the latter.

I've been an exploratory tester for a long time. I've worked with projects with very detailed specifications, with the end result of having a system that worked as specified 100% but filled 0% of the real use cases the system was intended for. I've worked with projects that don't bother with specifications at all - all is based on discussions around whiteboards. And I've worked with projects where all specifications are executable, with the experience that to make it possible we often like to minimize the problem we're solving to something we can execute around. 

The first exercise we do on my mob testing course involves an application that has very little documentation. And most people don't realize they should go search for the little documentation it has. The three sessions we did gave an interesting comparison.

The first session was freeform exploration, and the group was (as usual) all over the place. They would click a bit here, move to somewhere completely different, check a few things there, and make pretty much no notes other than the bugs I strongly guided them to write down. The group reported as their experience that they were "missing a plan".

The second session was exploration constrained to a particular area of functionality. The group was more focused, but had a hard time naming the functionalities they saw and finishing the testing of the things they saw. Again the group reported as their experience that they were "missing a plan", even if the box kept them more cohesive in the work they shared. 

The third session was tailored specifically for this group, and I had not done that before. I allowed the group 30 minutes to generate a plan. They selected a feature and a claim from its web page, after discussing what the unit of planning should be (use cases? user interface elements? claims on the web page?). Before spending any additional time hands-on with the application, on top of the two earlier sessions that had barely scratched the surface of the feature, they drew up their plan. 

The interesting outcome was that
  • They found fewer bugs
  • They were happier working against a recreated (low quality) checklist
  • They missed more bugs - they saw them while testing but dismissed them as irrelevant. 
  • I saw some of the symptoms they noticed as symptoms of something really significantly broken in the application, and having seen them test, I now know how I could isolate it. I suspect there are only a few people in that group who would know what needs more focus. 
I take this as a (trained) wish for clear assignments, clear answers and generally a world where we would have our tasks clear. I find myself thinking it is not the testing that I know, but that it is the testing a lot of my automator colleagues know. And that getting out of the need for someone else to hand us the "plan", and being active about making and changing our own plans as we learn, is the core of good results. 

We all come from different experiences. My experiences suggest that my active role as a software learner is crucial. Having documentation and plans is good, but having the mindset of expanding those for relevant feedback is better. 

Thursday, November 23, 2017

My team is looking for a manager

I love my team, and I love my manager. My current manager, however, has come to the realization that having 50 direct reports is too much, and while he always has time for me, there might be others who need a different type of support from a manager and don't get the same access.

At first, he opened two manager positions internally. Both my manager and my team encouraged me to apply for the one for us, but I have other aspirations: I plan on being the best tester there is, and if I move, I will become a software architect. A trip (again) to management sounds like the wrong move for me. Everyone else internally had similar ideas, so we ended up where we are now: we are looking externally for our team manager.

We're a really truly self-organized team (no product owner experiment ongoing, the team decides) and need a manager who understands what that means. And an ideal person for the role would be someone who is half tester (or dev) and half manager, and would like the idea of working as part of the team for some of their time.

As we were discussing this yesterday with the team, devs expressed that they'd love a tester. Well, they have good experiences of testers. And to clarify, they said it could be either someone with manual testing or test automation background.

So I call out to the people I know: would working in Helsinki, in a half tester, half manager role with an awesome team becoming more awesome all the time, be something you'd aspire to? If so, this is the position you should apply for.

Just a few words on what usually happens: my manager screens candidates and discusses the manager part. He involves the team in another interview.

We're also looking for someone with a good understanding of full-stack development for another similar position in a team that does Java/Javascript/AWS. And if you're like the team members we have now, aspiring to learn more about how to build awesome products, and you want to spend your time with an automation emphasis, we're also looking to replace one of our full-time testers as they turn into a cyber security consultant.

All these 3 people end up in what I consider the product I work with, consisting of 7 teams. If you'd like to try out something further away from me, my manager is also looking for a tester for cloud protection solutions.

Saturday, November 18, 2017

Observations on Survivorship Bias in Programming

Programming is a fascinating field of study. For a long time, I was discounting my programmer abilities for never staying around one language long enough to get fluent with the syntax, and if you need Google to remember when others (supposedly) didn't, it must mean that I'm not a programmer.

Mob programming changed that for me. I saw that others needed to google heavily too. Just like me, they remembered there were many sorting algorithms, but needed to check how a particular one would be implemented - they knew, just like me, the word to google for. And obviously, it's not like that is a problem we need to solve every day in the real life of a programmer; that's where libraries come in.

There's still other things that come in as a little voice in my head, trying to talk down the programmer me.
  • I don't take joy in all the same types of programming work as others around me do
  • I don't pay attention to all the cool libraries that are emerging and their inner workings for comparison purposes, still often happy to go with what I got
  • I'm more interested in mender than maker types of programming
I realized that instead of listing the things I do, the voice inside my head lists the things I don't do. And like so many women before me, when I don't tick all the possible boxes, I don't apply. Instead of a job application, I do this with what I identify myself as - usually as a tester, not a programmer. Yet recently, working with my inner voice, also as a (polyglot) programmer in addition to all the other things I am.

There's a thing referred to as survivorship bias. It is our unfounded belief that when we are successful, the things we did are what is needed for success. All of this while there might not be as strong a correlation as we like to tell ourselves.

So if a programmer does well while mostly talking about technologies in a particular way, both they and others around them attribute the visible interest in technologies as a reason they are a good programmer. A new line to a list of what we must be to qualify is born.

If a programmer does well not collaborating with others, just focusing on solo work to think deeply around the problem, both they and others around them attribute the ability to work alone as a reason they are a good programmer. A new line to a list of what we must be to qualify is born.

There are many behaviors we see successful people do. Without mobbing, the control over what people choose to let us see is theirs. But each thing creates a new line to the list of what we must be to qualify. 

Survivorship bias is strong in programming, and it results in lists that feel impossible to tick. And when the visible behaviors around you differ, it can be really easy to discount what you are.

We need to make it easier to be a programmer. It's not an end, it is a journey one can start. And there's many paths we can take. Be careful not to force the path you have chosen, be open to other options. 



Sunday, November 12, 2017

Why Do I Go to Conferences?

I find myself asking this question more often these days: why do I go to conferences? And in particular, why do I speak at conferences? And my answers vary, as I really don't know.

This week I spoke at Oredev, a developer conference, and felt totally disconnected and invisible. I did not talk to any new people. And new people did not talk to me. At first, I was quick to blame it on a tester identity, but it isn't that, as I also identify as a polyglot programmer. I just did not have chances for a discussion without first being the active one, and even when I was, topics changed from tech to life. I listened to many sessions, some really great and others not so much, and came back with a decision to cut down on conferences.

I used to get learning from conferences, but now my "being aware of techniques" learning quota feels full. Knowing of AWS, SAM, lambdas and step functions takes me somewhere, but actually applying those ideas takes me a lot further. And conferencing is threatening my time for practice.

My situation with this is not quite the usual one. I've been counting the number of talks I do per year, and I already decided to cut down a year ago. Yet looking at where I ended up isn't exactly showing the commitment: I have 27 sessions in 2017, and 30 each year for 2016 and 2015. At this point of my life, talks emerge from my need to organize my thoughts and drive my learning, and there are smaller time investments that would give me the same value.

So I wonder if people are finding pieces of joy, enlightenment and thoughts from whatever I end up sharing. Maybe that is worth the investment? There was one woman from Oredev I can thank for really making my day, coming to say one thing to me after my talk: "Thank you for speaking. It is so wonderful seeing women on tech conference stages." Most people say nothing, and pondering on this made me realize one of my speaking motivations is that I crave acceptance and acknowledgement.

Thinking a little further, I was thinking of the test conferences I find the most valuable for me: TestBashes. I've come back from those with new colleagues in the community to learn with, even friends. People I get to meet elsewhere, who bring so much joy into my life. But in particular, I remembered there is one accomplishment from each TestBash that fills my heart with joy: I came back with a connection that created a new speaker.

Thank you Gita Malinovska, Bhagya Perera and Kate Paulk for making me feel like I had a role to play in the world seeing what awesome speakers you are. Gita and Bhagya I mentored in speaking after TestBashes brought us together, and they never really needed me, but I needed them. Kate blew my mind with the discussions we had at TestBash Philly a year ago, when she seemed shy to take the stage, and I feel so proud seeing she delivered awesome at TestBash Philly this year.

There's a lot more names that I could drop that make me feel like I've served a purpose. But these three remind me that without going to conferences, our paths might not have crossed.

So I go to conferences for:

  • Collecting ideas that I need time to turn into actions at work
  • Making friends and maintaining our relationship
  • Encouraging awesome people to be the inspiration on stage they are off stage
I speak to make it cheaper to go. I speak in hope of recognition. I speak in hope of connection, as I have a hard time initiating discussions. But most of all, I speak to sort out my own thoughts. 

What are your reasons? 



Monday, November 6, 2017

Making Everyone a Leader

A year ago, some people were fitting the "Leader" title on me in the context of the testing community, and I felt strongly about rejecting it. We have enough self-appointed leaders calling for followers, and I felt - still feel - that we need to stop following and start finding our own paths, and just share enthusiastically.

Today, someone was fitting the "Leader" title on me in the context of work and our No Product Owner experiment. I was just as quick to reject it, but this time realizing even more strongly that I believe that in good self-organizing teams, everyone needs to become a leader instead of having one self-appointed or group-selected leader.

I believe our No Product Owner experiment will show its best sides if I manage to avoid being appointed the "leader". There will be things where people choose to follow me, like the idea of experimenting with something we thought was out of our reach, meeting people who "never have time" yet find time within three days when we ask, imagining the possible. But each one in my team will lead us on different things, and I follow with joy. Today I followed one of my developers as they led the way in solving customer problems where they could use my contribution. I followed another one of my developers by supporting him when he imagined a feature that we thought wasn't wanted, which turned out to be the "dream we did not dare to hope for". I followed my two tester colleagues in solving puzzles around automation where recognizing an element (no Selenium involved) wasn't straightforward, and working together was beneficial.

Everyone has room to be a leader. We don't have to choose one. We can let one emerge for different themes.

And what makes a leader, anyway? It's the followers. I choose to follow leaders in training, enthusiastically. It does wonders to how my group works. Maybe it would do wonders on communities too? 

Thursday, November 2, 2017

Trawling for feedback

As a team without a product owner, we needed to figure out our idea of what someone with a product management specialization could bring us. And we hit the discussion around the mysterious "customer voice".

At first we realized that having someone allocated as a "customer voice" with "decision power" (a product owner) isn't an automatic ticket for the team to hear any of that. So we ended up identifying a significant chunk of work that the team isn't doing, would love to have done better, and that goes under the theme of trawling for feedback.

With customers in the millions, there's the feedback we get through monitoring, without them saying anything. Knowing when they have troubles, knowing when they use features we think they must be using - all of that is primarily a privacy challenge, and only then a technical one. Just the idea of going for no product owner made us amp up our ability to see without invading privacy. It was necessary before, but now it was our decision to include the tools that improve our ability to help our customers. Mental state does wonders, it seems.

Then there's the feedback that requires time and effort. Reading emails and replying to them. Meeting people on specific topics, and meeting people on general topics for serendipitous feedback to emerge. There's the skill of recognizing what feedback is critical. Being available for feedback, and identifying what of it is the core, not the noise. And passing the core to the people who can do something about it - the team.

We realized there is a fascinating aspect of timing to this feedback trawling. Think of it as a comparison to trawling for fish.

If you get a lot of fish without the proper storing and processing facilities, it will go bad and get wasted.

Feedback is similar - it is at its maximum power when fresh. That extra piece of motivation when you see the real person behind the feedback gets lost when we store the feedback for an appropriate time to act on it.

Having to deal with lots of fish at once creates costs without adding much value. 

While writing down a thing is a small cost, going through all the things we have written down, telling people of their status, and our ideas of their importance isn't a small cost anymore.

If you pass a fish from the fisherman to the second person in the chain, it is not as fresh anymore. 

Feedback delivered first-hand to the developers just is different. It's more likely to be acted on the right way, with added innovation.

Tuesday, October 31, 2017

Starting a No Product Owner Experiment

In the usual two-week cadence, our product owner showed up in our team room looking exhausted. We were glad to hear it wasn't the fact that he came to do planning with us, but that the early hit of winter and snow, and the related tire change, had left him feeling beat up.

Our Product Owner usually sends us a ton of emails (forwarded bits he wanted us to react on), shows up regularly every two weeks, and whenever we ping him in between. He is not a part of our team and does not sit with us. He used to, but in an experiment to change the way the team communicated, he was moved further away, to a huge positive impact. The developers who used to report to him changed to reporting to the team, and we have not been the same since.

We started going through the usual motions of how my team plans, listing things that we knew needed addressing. I was making my own notes on a computer without sharing a screen, hoping someone else would step up and write stuff on our whiteboard like we always do, without success. The list was starting to get long, and yet another thing came up. The product owner spoke up: "I want you to prioritize this", cutting off, with the power voice, the discussion that was leading to understanding. I could feel the energy sinking.

So I stepped in.

"Hey, this would seem like the time to introduce this thing we've been talking about for the last month or so with the product owner. We've agreed to try out an experiment with no product owner."

It wasn't the first time my team heard of it. It was definitely not the first time the product owner heard of it, as it was a stretch we had agreed on together.

I summarized the change in the context of this meeting: the team now has the control (and responsibility) of priority calls. We did not have one person who "wants us to do this", but we had a team that was set out to be customer-obsessed and to care enough to understand what our real options and the right choices would be.

With an agreement to later agree on what the PO used to do and what it really means to not have a product owner but a PMS (Product Management Specialist - the person is still around to help), we continued planning with high energy.

There was a little bit of rebellion in the air. But everyone discussed things together, heard each other out, and ended up - by our own route - with exactly the thing our ex-PO wanted us to prioritize, feeling more energized.

The outcome of the rebellious planning seemed better than what I had come to expect. There was no passivity around "officially delivered items", where the real outcome of what would happen would end up very different. We talked enthusiastically about improving some of our core automations, agreed on pairs that could contribute on things, and prioritized our short term backlog.

My team does biweekly planning, but it is more of a biweekly backlog creation. And out of that list, we just work on items more in a kanban style.

My first lesson: it's not about what the change is, it's about changing. Trying something new out is energizing. Our real challenges of understanding what "No PO for three months" means are still ahead of us. 

Sunday, October 29, 2017

Yes, I am a manual tester and ...

In my team, we have two testers, and our interests and contributions couldn't be more different. While I like to refer to myself as an exploratory tester, most people around me think of me as a manual tester. I try very hard not to correct their terminology, but I often use the improv Yes, and... rule to add to what others say.

Yes, I'm a manual tester and I create disposable automation.
Yes, I'm a manual tester and we just addressed some major issues that would have been next to impossible to spot without thinking hard around the product.
Yes, I'm a manual tester and while my hands are on the keyboard, the software as my external imagination speaks to me.

The practice of avoiding correcting people's established terminology is not "help to cheapen and demean the craft"(1). That practice is focusing on what matters, and what matters is us collaborating, creating awesome stuff.

I might not have always liked the terms manual and automated testing, but I can live with the established vocabulary. Instead of changing the vocabulary, I prefer changing people's perceptions. And the people who matter are not random people on Twitter, but the ones I work with, create with, every office day.

Looking at the two testers in my team, I can very easily see why I'm the "manual tester" - I think best with hands on keyboard, using the product as my external imagination. I prefer to bias myself to experiencing much of the testing as the users would - using the product. Even when I test with code, I prefer to bias myself on using the APIs as users would. The mechanism of running the test - manual - leaves me time and focus to think well with the product giving me hints on where to go next.

Similarly, I can easily see why the other tester is the automated tester. Their focus is on getting a program to run the tests unattended. They too think hard (often with less of an external imagination, due to focusing on coding around a problem) while creating the script to run, and are all the time, with each test, forced to focus on the details. So their tool and approach of choice biases them to experience the product as a program can. The mechanism of running the test - automated, eventually - leaves them time to do more automated tests. Or rather, to try to figure out why the ones created are failing, again.

Together, we're awesome. If we were more inclined to move between the roles of manual and automated tester, we'd be more awesome individually. But as it stands now, we have plenty of bugs to find that automation couldn't find: they are about aiming for the wrong scope. The person doing automation could find them, if all their energy wasn't going into the details of how hard automating a Windows GUI application can be. So right now we're lucky to have two people, with different focuses.

I wrote this inspired by this - Reference (1):

So here. I just cheapened and demeaned the craft. Except I didn't. The word policing is getting old. The next generation of manual testers can just show how awesome we are, giving up the chains of test cases and thinking well with our hands on the keyboard and our brains activated.

Imagine what work would be like if we stopped policing word choices and approached communication with a yes, and -attitude.

Wednesday, October 25, 2017

Your stack isn't as full as mine

Recently, I've seen an increasing amount of discussion on the "full-stack engineer" coming my way. Just as it was important to me at some point to identify clearly as a "tester", I now have colleagues who find similarly passionate meaning in the "full-stack engineer".

Typically, one of these full-stack engineers works on both front-end and back-end. So on the web stack, typically they cover the two ends.

An engineer wouldn't be properly full-stack if the stack did not include both dev and ops. So being DevOps is clearly a part of it. 

Some of these full-stack engineers take particular pride in including testing (as artifact creation) into their stack of skills. They can deal with testing so that they are not dependent on testers, and more importantly, so that they feel safer making changes throughout the stack that is getting too big to keep in mind at once.

The first place where the fullness of the stack starts really breaking is when we face the customer. Can a full-stack engineer expect to get all their requirements cleanly sliced, or should they also have capabilities in understanding, collecting and prioritizing the customer's explicit and implicit wishes? This is usually where a full-stack developer throws responsibility over to a complete product owner, who magically has the right answers. A Complete Product Owner is the customer-side match for the Full-Stack Developer.

And for me, the idea of being a Full-Stack Developer breaks in another way too. The web stack isn't always the full stack. For me it most definitely is less than half of the stack. The system created with the web stack is just as dependent on the system created with the C++/Python mix as the other way around.

So frankly, my dear full-stackers. Your stack isn't full enough yet. Time to move towards polyglot. Onwards, to the unicorn state.

*said on a day I had to look at C/C++, Python, Groovy, JavaScript/Angular, Java, CSS for work, and C# for hobbies. I feel slightly exhausted but certain the list isn't going to change in any direction but wider.

Saturday, October 21, 2017

How is European Testing Conference Different?

One sunny afternoon in San Diego over three years ago, I took a call with Adi Bolboaca. That call has since defined a lot of what my "hobbies" are (conference organizing), but it also set an example of how I deal with things in general. From idea to schedule, that call was all it took. We decided to start a conference.

The conference was named European Testing Conference to reflect its vision: we were building the go-to testing conference in Europe and we'd take on the challenge of having the conference travel. In the three editions so far, we have worked with Bucharest (Romania), Helsinki (Finland) and Amsterdam (Netherlands).

As the Amsterdam Edition is well on its way to take place on February 19-20th, 2018, someone asked how we position ourselves - how is European Testing Conference different?

Testing, not testers

Our organizers are an equal mix of people who identify as testers and programmers. What brings us together is an interest in testing. The conference looks at testing as different roles do it, and seeks to emphasize the collaboration of different perspectives in making awesome products. We like to think of testing as Elisabeth Hendrickson put it: it is too important to be left just to the specialized testers. Our abilities to solve the puzzles around feedback make a difference for quality, speed of delivery and long-term satisfaction for those of us who build the software.

Practical focus

We seek to make space for sessions that are practical, meaning they are more on the what and how as opposed to why, and they are more on the patterns and practices. We start with the idea that testing is important and necessary, and seek to raise the bar in how testing is done.

Enabling peer learning

We know that the best sessions in conferences, with regards to learning, often happen in the hallway track, where people are in control of the discussions they engage in. Many conferences formalize the hallway track as something that happens on the side. We formalize hallway track sessions to be a part of the program, so that we increase the chances of everyone going home with a great, actionable learning from peers.

Peer learning happens with interactive sessions that have just enough structure so that you don't have to be a superb networker - you can just go with the flow. As a matter of fact, we don't give you the choice of passively sitting and listening to a talk when you could learn from your peers in an interactive format, so these sessions are always conference-wide.

We do three different kinds of interactive sessions:

  • Speed Meet makes you go through many people quickly, with just enough structure to ensure that it's not the usual chit-chat of me introducing myself - what the introducer gets to share is learner-driven. Each participant creates a mind map, and the person you get to know will drive the discussion based on what they select on your map. 
  • Lean Coffee is a chance to discuss testing topics of the whole group's choice. Regardless of its name, it is more about the discussions and less about the coffee. We invite our speakers to facilitate tables of discussions, so this is also your chance to dig deeper into any of the topics close to our speakers' hearts. 
  • Open Space makes everyone a speaker. A good way to prepare for this is to think about what kinds of topics you'd love to discuss or what knowledge you'd like to share. You get to propose sessions, and they could also be on topics you know little of but want to learn more about. 

Lean Coffee and Open Space are regular sessions in conferences, but we have not seen anyone else do them as part of the day program, for the whole conference at once. You will meet people in this conference, not just listen to the speakers we selected.

Schedule by session types

Interactive sessions have no talk sessions to listen to passively at the same time. Similarly, when talk sessions take place, we have four of them scheduled on tracks. We also have in-conference workshops, and again, when it's time to workshop, there are no talk sessions available simultaneously. This is to encourage a mix of ways of learning. It's hard enough to select which topic to go for; if the session type is also a variable, it just gets harder to get the learning mix right.

Speakers selected on speaking their stories

All speakers we have selected have been through a collaborative selection process. This means that we did not select them based on what they wrote and promised they could talk on, we had a chat with each and every speaker and know how they speak. We're hyped about the contents they have to share as part of our great program.

Some of the talks are not ones the speakers submitted. When collaborating with a speaker, sometimes you learn that they have something great to share that they did not themselves realize they should have submitted.

Track speakers are keynote quality

We take pride in treating our speakers fairly, meaning we guarantee that they don't have to pay to speak - instead, we compensate the direct costs of joining our conference. We go a bit further, sharing profits with the speakers. This means that the speakers are awesome. They are not traveling to speak on a vendor marketing budget to sell a tool or service, but are practitioners and great speakers.

Enabling paired sessions

Our program has sessions with two speakers, and when we select a session like that, we pay the expenses of both speakers. While we strongly believe that a two-person talk is not a talk where two people take turns delivering what one person could deliver alone, we actively identify lessons that require two people. We pair in software development, so we should be able to pair on our talks too.

Organized by testing practitioners

Our big fancy team is a team of practitioners doing the conference as a hobby. We love learning together, creating together and making a great program of testing available together. We spend our days testing and programming. We know what the day to day challenges are and what we need to learn. Our practitioner background is a foundation for our ability to select the right contents.

Traveling around Europe

Europe is a diverse area, and we travel around to connect with many local communities. It sometimes feels ambitious of us, as every year we have a new community to find and connect with to sell our tickets. Yet, going to places and taking awesome content to them is what builds us forward as a bigger community. 

We love other testing conferences

We don't believe that the field of testing conferences is full - there are so many people to reach and enable to join the learning in conferences. If our content or schedule is not right for you, we encourage you to look at the others. We particularly love conferences that enable speakers without commercial interest by paying their expenses, and we often give a shout-out to TestBashes (all of them!), Agile Testing Days (both Germany and USA), and are delighted to be able to mention also Nordic Testing Days, Copenhagen Context and Romanian Testing Conference. 




Friday, October 20, 2017

Sharing while minding the price of shame

Some weeks ago, I was sitting at London Heathrow airport with a little post-it note in front of me saying "review, write, finalize". I had three separate writing assignments waiting to be dealt with, and what would be a better place to deal with writing than being stuck at an airport or on a plane. I started with the review, of an article that is now posted and still causing buzz on my twitter feed: Cassandra Leung's account of power misuse in the testing community by Mr Creep.

Reading it made me immediately realize I knew who Mr Creep was, and that I had tolerated his inappropriateness on a different scale just so that I could speak at his conferences. I knew that with who I was now, I was safe. I had the privilege of thinking his behavior was disgusting and yet tolerating it. Reading the experience through the eyes of someone with less privilege was painful.

I could do something. So I talked to this Mr Creep's boss. They did all the rest. Shortly after, I saw this Mr Creep changing his status on LinkedIn to "Looking for new opportunities". I can only hope that the consequences of his own actions would make him realize how inappropriate they were. And that he would learn new ways of thinking and acting. At least he is no longer in this position of power. He's still an international speaker. Hopefully he is no longer the person who thinks this slide is funny:


This slide happened years ago. We did not realize what it could mean for the women in the community. While the source of this one is this Mr Creep, there are other creepy "funny" slides with an exclusionary impact. Don't be Mr Creep when you present. 

The article that started this for me came out later. At the time of publishing, the proper reaction from the conference was already a fact, making this article very special amongst all the accounts of creepy behavior. This Mr Creep remains unnamed. And I believe that is how it should be. In an era where a lot of our easy access to power comes through social media, there are kinder forms of displaying power and expressing that something was inappropriate. Bullying the bully or harassing the harasser are not real options. For a powerful message on the price of shame, listen to the TED talk by Monica Lewinsky.

Calls like this are also common:
Outing him could be necessary if we didn't know he had already been addressed. But all too late.

There have been a number of women who have also come forward (naming him in private) with their own experiences with Mr Creep in the testing community, some with the exact same pattern. Others shared how they always said no to conference speaking in places associated with him. And the message of how unsafe even one Mr Creep can make things for women became more pronounced.

In the last days of me thinking of writing this article, my motives were questioned: maybe I just want to claim the credit for action? But there is a bigger reason that won't leave me alone before I write this:

I need to let other women know that they have the power to make a difference. When it appears that organizations will prioritize their own, sometimes they instead prioritize their community. They need someone to come forward. If you've been the victim, you don't need to come forward alone. Coming forward via proxy is what we started with here. And after creating the feeling of safety, we brought down the proxy structure, giving power back where it belongs - with the victim.

I need to let other women know that the conference we've talked about in hushed voices has chances of again being a safe place for us to speak at.

I need to let everyone know that seeing all-male lineups may mean that all the women chose to stay safe and not go.

I was in a position of privilege to take the message forward. I was an invited international speaker, with an open invitation to future conferences I was ready to drop. I had a platform that gave me power I don't always have. But most of all, I couldn't let this be. I had to see our options.

While what I did was one discussion about someone else's experience, it drained me. It left me in a place where I couldn't speak of my own experience as part of this. It left me with guilt, second-guessing whether other people's choices of boycotting would really have been an option. It left me with fear that Mr Creep would target his upset at me (I haven't seen that so far). But most of all, it filled me with regret, as I now know that I could have chosen to address the problem a lot earlier.

Mr Creep had to hurt one more person before I was ready to step up. Mr Creep got to exist while I had something I could personally lose by outing him or confronting him about any of his behaviors.

I need to write this article to move forward, and to start my own recovery. This Mr Creep is one person, and there are many more like him around. Let's just call out inappropriateness while considering the appropriate channels. 

Monday, October 16, 2017

Innocent until proven guilty

I read a post Four Episodes of Sexism at Tech Community Events, and How I Came Out of the (Eventually) Positive and while all the accounts are all too familiar, there is one aspect that I feel strongly about. Story #3 recounts:
It takes me two years to muster the confidence to go to another tech event.
The lesson here is that it is ok to remove yourself from situations where you don't feel comfortable. For many people, not showing up is a very real option, because someone can make us feel uncomfortable in ways that matter.

I hate the ways people report being made to feel uncomfortable. And I particularly hate when someone's report of being made uncomfortable gets dismissed or belittled by conference organizers because of a belief that the "offenses" are universally comparable. That alleged perpetrators are always innocent until proven guilty. This idea is what makes people - word against word, in positions of unequal power - allow the bad behaviors to continue.

There will not be clear cut rules of what you can and cannot do in conferences to keep the space safe. Generally speaking, it is usually better to err on the side of safe. So if you meet someone you like beyond professional interests in a professional conference, not expressing the interest is on the safe side.

Some years ago, I was at a conference where someone left halfway through the conference because of someone else's bad behavior. I have no clue what the bad behavior was, and yet I side with the victim. For me, it is better to err on the side of safe again, and in a professional context reports like this don't get made lightly. Making false claims is not a common way of getting rid of people, even if that is the story that gets recounted along with "innocent until proven guilty".

We will need to figure out good ways of mediating issues. Should a sexist remark cost you a place in the conference you've paid for - I think yes. Should a private conversation that makes others who overhear it uncomfortable cost you a place in the conference you've paid for - I think yes. On some occasions, an apology could be enough of a remediation, but sometimes protecting the person who was made to feel unsafe takes priority, and the people misbehaving don't get automatic access to knowing who to get back at for potential retaliation. It's a hard balance.

The shit people say leaves its marks. I try not to actively think of my experiences, even to forget them. I don't want to remember how often saying no to a personal advance has meant losing access to a professional resource. I don't want to remember how I've been advised on what clothing to wear while speaking in public. I don't want to remember how my mistakes have been attributed to the whole of my gender. There's just so much I don't want to remember.

Consequences of bad behaviors can be severe. Maybe you get kicked out of a 2000 euro conference. Maybe you get fired from your job. Maybe you get publicly shamed. Maybe you lose a bunch of customers.

Maybe you learn and change. And if you do, hopefully people acknowledge the growth and change.

If you don't learn and change, perhaps excluding one to not exclude others is the right thing to do.

In professional settings we don't usually address litigation, just consequences of actions and actions to create safer spaces. Maybe that means taking seriously the person who steps forward feeling offended, even when there is no proof of guilt.

I don't want people reporting years of mustering the confidence to join the communities again. And even worse, many people report they never joined the communities again, leaving the whole industry. I find myself regularly on the verge of that. Choosing self-protection. Choosing the right to feel comfortable instead of being continuously attacked. And I'm a fighter. 

Saturday, October 14, 2017

Caring for Credit

The last three years have been a major source of introspection, as I've taken on the journey of becoming (more) comfortable with pairing and mobbing. Every time someone hints that I want to pair to "avoid doing my own work", I flinch. I hear the remark echoing in my head, emphasizing my own uncertainties and experiences. Yet, I know we do better together, and fixing problems as they are born is better than fixing them afterwards.

A big part of the way I feel is the way I was taught to feel while at university. As one of the 2% women minority studying computer science, I still remember the internal dialogue I had to go through. The damage done by the few individuals telling me that I probably "smiled my way through programming classes" made me avoid group work and need to prove that my contribution in a group was anything more than just smiling. And I remember how those experiences reinforced the individual contributor in me. Being a woman was different, and I took every precaution I could to be awesome as much by myself as I could. If I ever did not care for doing more than others, that would immediately backfire. And even if I did care, I could still hear the remarks. I cared then, I still do. And I wish I wouldn't. 

My professional tester identity is a way of caring for credit. It says which of all the things I do are so special that I would like them to be separately identified. It isn't keeping me in a box that makes me do just testing, but it says that that is where I shine. That is where most of my proud moments come from. Yet it also says that I'm somehow a service provider, probably not in the limelight of getting credit, and I often go back to liking the phrase:
The best ideas win when you care about the work over credit.
I want to create awesome software, and be recognized for my contributions to it. Yet I know that my need for recognition is a way of not letting the best ideas win - and so is anyone else's need for recognition.

As a woman, the need for attribution can become particularly powerful. If you're asked about the great names in software, most people don't start listing women - even if that has recently changed in my bubble, which talks about awesome people like Ada Lovelace, Margaret Hamilton, and Grace Hopper. And looking a little beyond into science, listing women becomes even less of a thing.

The one man we generally tend to think of first in science is Einstein. Recently I learned that he had a wife who was also a physicist and contributed significantly to his groundbreaking science. He did not raise her significant contributions to the general public. Meanwhile, Marie Curie is another name we'd recognize, and the reason recognition is tied to her is that her (male) colleagues actively attributed work to her. 

Things worth mentioning are usually a result of group work, yet we tend to attribute them to individuals. When we eat a delicious cake, we can't say if it was great because of the sugar, the eggs or the butter. All were needed for the cake to become what it is. Yet in creating software products, we tend to have one ingredient (programming) take all the credit. Does attribution matter then? 

It matters when someone touts "no women of merit" just for not recognizing the merited women around them. It matters when people's contributions are assessed. Reading recent research saying that women researchers must publish without men to get attributed, and thus tenured, made me realize how much the world I was taught in at school still exists. 

People are inherently attribution-seeking - we wish to be remembered, to leave our mark, to make a difference. A great example of this is the consideration of why there are no baby dinosaurs - leading to the realization that 5/12 identified species are actually just adolescent versions of their adult counterparts. 

From all of my talks, the bit that always goes viral is an adaptation of James Bach's saying: 
I've lived this for years and years, and built a whole story of illusions I've broken, driving my tester identity through illusion identification. Yet, I will always be the person popularizing past sayings. 

Caring for credit, in my experience, does more harm than good. But that is what humanity is built around. Take this as a call to actively share the credit, even when you feel a big part of the credit should belong to you. We build stuff together. And we're better off together. 

Friday, October 13, 2017

What Does a Product Owner Do?

As an awesome tester, I find myself very often pairing with a product owner on figuring out how to fix our ways of working so that we have the best chances of success when we discover the features while delivering them. My experience has been that while a lot of the automation-focused people pair testers up with developers, pairing on risk and feedback with the product owner can be just as (if not more) relevant.

Over the years, I've spent my fair share of time sharpening my skills at seeing illusions from a business perspective, and dispelling them hopefully before they do too much damage. Learning to write and speak persuasively is part of that. I've read tons of business books and articles, and find that the lessons learned from those are core to what I still do as a tester.

I find that a good high-level outline of the skills areas I've worked on is available with the Complete Product Owner poster. Everything a product owner needs to know is useful in providing testing services.
Being a Product Owner sure is a lot of work! - William Gill

In preparation for a "No Product Owner" experiment, I made a list of some of my expectations of what a product owner might do (together with the team).

What Does a Product Owner Do?
  • has and conveys a product vision
  • maintains (creates and refines) a product backlog - the visible (short) and the invisible (all wishes)
  • represents a solid understanding of the user, the market, the competition and future trends
  • allows finishing things that have been started, at least for a pre-agreed time-box
  • splits large stories to smaller value deliveries
  • prepares stories to be development-ready (discovery work)
  • communicates the product status and future to other stakeholders
  • engages real customers and acts as their proxy
  • calculates ROI before and after delivery to learn about business priorities
  • accepts or rejects the work done
  • maintains conceptual and technical integrity of the product 
  • defines story acceptance criteria
  • does release planning
  • shares insights about the product to stakeholders through e.g. demos 
  • identifies and coordinates pieces to overall customer value from other teams
  • ensures delivery of product (software + documentation + services) 
  • responds to stakeholder feedback on wishes of future improvements and necessary fixes
  • explains the product to stakeholders and drives improvement of documentation that enables support to deal with questions
These people are busy, and can use help. How much of your testing is making the life of a product owner easier? 

Thursday, October 12, 2017

Run the code samples in docs

I was preparing for a session on exploratory testing at a conference, wanting to make the point that testing an API is just like testing a text field. Even the IDE you use gives you keyword recognition based on just a few letters, and whatever values you pass in are a guided activity. The thinking around those fields is what matters. And as I was preparing, I went to my favorite test API and was delighted to notice that since the pains of the public testing sessions, there was now some basic getting-started documentation with examples.

I copy-pasted an example around approving arrays into the IDE, and things did not go as I would have expected. The compiler was giving me errors, and I could barely extend my energy to see what the error messages were about. There were simple errors (a hypothetical reconstruction follows the list):
  • a line of code was missing semicolon
  • a variable a was introduced, yet when it was used it got renamed to x
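
To make those two errors concrete, here is a hypothetical reconstruction in Java - not the actual snippet from the documentation, and the Approvals.verifyAll call is only my stand-in for the "approving arrays" example:

    // Hypothetical reconstruction of the kind of broken sample described above.
    // Approvals.verifyAll stands in for the "approving arrays" example; the exact
    // call in the real documentation may well differ.
    import org.approvaltests.Approvals;

    public class DocSample {
        public static void main(String[] args) {
            // As published (would not compile):
            //   String[] a = {"one", "two", "three"}    <- missing semicolon
            //   Approvals.verifyAll("words", x);        <- introduced as 'a', used as 'x'

            // As it presumably should have been:
            String[] a = {"one", "two", "three"};
            Approvals.verifyAll("words", a);
        }
    }
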
As a proper tester, I was more happy than sad with the lack of quality in the documentation, and caused a bit of pain to the poor developer by asking them not to fix it for a few hours, so that I could have other testers also see how easy finding problems in a system is, because documentation is part of your system. I think the example worked nicely to encourage anyone to explore an API with its documentation.

The cause of the problem I saw was that the code sample had never really been executed. And even if it had been executed once, over time it could break with changes, as it wasn't executed together with the code.

A lot of the time, we think of (unit) tests as executable documentation. They stay true to the functionality if we keep making them pass as we change the software. Tests work well to document drivers. But for documenting frameworks, you need examples of how the framework calls you. It makes sense to write the examples so that you can run them - whether they are tests or another form of documentation.
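
As a minimal sketch of what that could look like - assuming JUnit and the same approval-testing library, with class and method names that are purely hypothetical placeholders - the getting-started example could simply live inside a test that runs on every build:

    // A sketch of "documentation examples as tests": the snippet published in the
    // getting-started guide is the body of a test, so it is compiled and executed
    // together with the code. Names here are illustrative, not from the real docs.
    import org.approvaltests.Approvals;
    import org.junit.jupiter.api.Test;

    public class GettingStartedDocExamplesTest {

        @Test
        void approvingArraysExample() {
            // This block is what gets copied (or included by reference) into the docs.
            String[] values = {"one", "two", "three"};
            Approvals.verifyAll("values", values);
        }
    }

If the published snippet is pulled from the test source rather than retyped, a missing semicolon or a renamed variable breaks the build instead of breaking the reader.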

Documentation is part of your API. Most of us like to start with an example. And most of us will choose something else, if possible, when the documentation isn't helpful. Keep it running.