Wednesday, August 24, 2016

Visual Talk Feedback

Last spring, a Finnish colleague was preparing for his short presentation at an international conference. I've had a long-standing habit of practicing my talks with the local community, and he decided to do the same. He invited a group of people to hear him deliver the talk.

It was an interesting talk, and yet we had a lot to say on the details of it. Before we got to share what was on our minds, one of the people in the audience suggested a small individual exercise. Together, we jotted down the main points of the story line chronologically on a whiteboard. Then each of us took a moment to think about how engaged we felt at different points of the presentation. Finally, we all took turns drawing our feeling timeline on the whiteboard.

What surprised us all in the visualization was how differently we saw what the engaging key takeaway moments were for us. A diverse group appreciated different points!

Knowing that some of us liked almost all parts of the presentation provided a great frame for talking through the improvement details each of us had. It also generated improvement ideas that would not have been available without an image we could refer to.

Today, I was listening to a talk in a coaching session over Skype. I was jotting down the story line and thinking about how I could visualize the same thing online. Next time, I'll get Paper on the iPad out early on and share that with whoever I'm mentoring. It will provide us with an anchor to see how the talk improves. And it gives a mechanism for inviting your other practice audiences to engage in giving feedback.

It was just a typo

As a tester, I'm a big believer in fixing UI string typos as I see them. You know, just going into the code, and spending a little time fixing the problem instead of spending the same (or more) on reporting it. These are just typos. These are changes that I find easy to understand. And yet, I test after I make those changes.

Last week, I was testing a cleanup routine. Over the years, our product database has piled up quite a lot of cruft, not least because an earlier (imperfect) implementation created duplicate rows for everything touched in a common user flow. I asked the developer who created the cleanup routine what it was about and for his advice on testing it, only to learn that it was "straightforward" and that he had spent time testing it.

As we ran the cleanup routine on the full data, it became obvious that "testing it" was not what I would mean by testing it. The cleanup of a copy of production data took 6 hours - something no one could estimate before the run started. Meanwhile, the database shouldn't be touched or things will get messy.

So we talked about how we cannot update production the way we do test environments - hoping no one will touch it. We need to actually turn things off and show a message for the poor users.

The six hours and a third of the database size vanishing hint to me that this is "straightforward" only because our data is far from straightforward. With the very first tests, I discovered that data was lost that shouldn't have been, resulting in what we would call a severe problem: the product's main feature, reports, not working at all. To find the problem, all I needed to do was try out the features that rely on the data, starting from the most important one: this one.

Fast-forward a few days, and there's a fix for the problem. And the developer tells me it was just a typo. Somewhere in the queries with IDs, one of the groupings was missing a prefix. We talk about the importance of testing and I share what I will do next, to learn he had not thought of it. I joke that at least he did not tell me it was just a 10-minute fix, after a significant amount of my time went into making him even aware that a fix was needed.
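A missing prefix in a query is a classic way for "just a typo" to destroy data. The sketch below is hypothetical - the post doesn't show the real queries, and the table and column names here are invented - but it demonstrates a well-known SQL pitfall: an unqualified column name in a subquery silently resolves against the outer query, turning a targeted cleanup into a full wipe.

```python
# Hypothetical illustration (not the actual product code): a cleanup
# routine that deletes duplicate rows via a subquery.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE reports (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE duplicates (report_id INTEGER);
    INSERT INTO reports VALUES (1, 'monthly'), (2, 'yearly'), (3, 'adhoc');
    INSERT INTO duplicates VALUES (3);  -- only row 3 should be cleaned up
""")

# "Just a typo": the subquery selects `id`, but `duplicates` has no `id`
# column, so SQL silently resolves it to the outer `reports.id`. The
# predicate becomes `id IN (id)` - true for every row while `duplicates`
# is non-empty.
conn.execute("DELETE FROM reports WHERE id IN (SELECT id FROM duplicates)")

print(conn.execute("SELECT COUNT(*) FROM reports").fetchone()[0])  # prints 0
```

The intended query, `SELECT report_id FROM duplicates`, would have removed only row 3. And finding the damage takes exactly what the post describes: trying out the features that rely on the data.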

The phrase "it was just a typo" is interesting. We're an industry that has blown up space rockets for just a typo. We have lost various organizations significant amounts of money for just a typo. Just a typo might be one of the most common sources of severe problems.

For any developers out there - I love the moment when you learn that it's not about the extent of the fix but the extent of the problem the missing fix causes. I respect the fact that there are fixes that are hard and complex. "Just a typo" is a way of expressing that this isn't one of those.

Tuesday, August 23, 2016

Just read the check-in!

Today was one of those days when something again emerged. At first, there was a hunch about a bug messing up a sort order, and all of a sudden there was a fix for it.

The fix came with very little explanation. So my tester-detective hunch drove me to the routines that I do. I went to see the check-in in version control and routinely opened up the three changed files without any intention of actually reading them.

The first thing I realized was that the files being changed had names that matched my idea of what I would be thinking of testing. More often than I care to remember, this has not been the case.

The first nagging feeling came from realizing there were three files. A small fix, and three files changing. So I looked at the diffs to see that the changes were more extensive than "I fixed a bug" warranted.

I walked up to the developer and asked about the changes - "So you needed to rewrite the sorting?" - to learn that it was long overdue.

With a little routine investigative work, I had two things I wouldn't have otherwise:
  1. An actual content discussion with the developer who thought that the change he was making was obvious
  2. A wider set of testing ideas I would spend time on to understand whether the newly reimplemented feature would serve us as well as the bad old one had. 
There's so much more to having access to your version control as a tester than reviewing code or checking in your code/changes. Looking at check-ins improves communications and keeps absent-minded developers honest. 

Circular discussion pattern with ApprovalTests

At Agile 2016 on Monday evening, some people from the Testing track got together for dinner. Discussions led to ApprovalTests with Llewellyn Falco, and an hour later people were starting to get a grasp of what it is - even though I'd think Golden Master would be quite a common concept.

Just a few weeks earlier, I was showing ApprovalTests to a local friend, and he felt very confused by the whole concept.

Confusion happens a lot. For me, it was helpful to understand, over a longer period of time, that:
  • The "right" level of comparison could be Asserts (hand-crafted checks) vs. Approvals (pushing results to a file & reviewing them for correctness before approving them as checks). 
  • You can make a golden master of just about anything you can represent in a file, not just text. 
  • The custom asserts are packaged clean-up extensions for types of objects that make verifying that type of object even more straightforward. 
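The bullets above can be sketched in a few lines. This is not the actual ApprovalTests API - just a minimal, invented rendering of the golden-master mechanism it packages: the check writes what it received to a file and compares it against a previously reviewed and approved file.

```python
# Minimal golden-master sketch (the idea behind ApprovalTests,
# not its real API): received output vs. an approved file.
from pathlib import Path

def verify(name: str, received: str, approve: bool = False) -> bool:
    approved_file = Path(f"{name}.approved.txt")
    received_file = Path(f"{name}.received.txt")
    received_file.write_text(received)
    if approve or not approved_file.exists():
        # A human reviews the received output and blesses it
        # as the golden master.
        approved_file.write_text(received)
    ok = approved_file.read_text() == received
    if ok:
        received_file.unlink()  # clean up on success; keep it for diffing on failure
    return ok

# First run: a human approves the output. Later runs just check it.
assert verify("sorting", "a, b, c", approve=True)
assert verify("sorting", "a, b, c")      # unchanged -> passes
assert not verify("sorting", "c, b, a")  # regression -> fails, received file kept
```

The human judgment moves from hand-crafting asserts to reviewing the received output once. And since the comparison is file-to-file, the golden master can be anything you can serialize to a file, not just text.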
Last week, I watched my European Testing Conference co-organizers Aki Salmi and Llewellyn Falco work on the conference website. There was content I wanted to add that the platform did not support without a significant restructuring effort. The site is nothing fancy, just Jekyll + markdown files built into HTML. It has just a few pages.

As they paired, the first thing they added was ApprovalTests coverage for the current pages, to keep them under control while restructuring. For the next couple of hours, I just listened to them stumbling on various unexpected problems that the tests caught, and moving fast to fix things and adjust whatever they were changing. I felt I was listening to the magic of "proper unit tests" that I so rarely get to see as part of my work.

Aki tweeted after the session. If you go see the tweet I quoted, an exemplary confusion plays out as a result of it:
  1. Someone states ApprovalTests are somehow special / good idea.
  2. Someone else asks why they are different from normal tests.
  3. An example is given of how they are different.
  4. The example is dismissed as something you wouldn't want to test anyway
I don't mean to pick on the person in this particular discussion, as what he says is something that happens again and again. It seems it takes time for the conceptual differences of ApprovalTests in unit testing to sink in before one sees the potential.

I look at these discussions more for the positives of what happens to the programming work when these tests are around, and I see it again and again. In the hands of Llewellyn Falco and anyone who pairs with him, ApprovalTests are magical. Finding a way of expressing that magic is a wonderful puzzle that often directs my thinking around testing and ApprovalTests. 

Thursday, August 18, 2016

Defining our SLA retroactively

I've been fluctuating between being focused and energetic about getting all the stuff in as good an order as I possibly can before changing jobs (my last day at Granlund is September 2nd) and being sad and panicky about actually having to leave my team.

Today was one of those sad and panicky days, as I learned that the last three things coming out of our pipeline did not really work quite as expected, and feedback was needed.

We changed a little editing feature with new rules, resulting in an inability to do any editing for that type of object after the change - the data does not yet adhere to the new rules, and it was not "part of the assignment" to care for the data. And yet, we never intentionally release things that would break production.

We cleaned up some data with straightforward rules that shouldn't have impacted anything. Except they completely broke our reporting feature, which is based on the unclean data.

We nearly finished the main feature area we've been working on for months (too big!!), except that I know that from today's "just five more fixes" there are bound to be 3, 2 and 1 more to go.

I love the fact that my team's developers have a great track record of fixes not making things worse. That they take the time and thought to understand what the feedback is, instead of patching around it. And that they go the extra mile beyond what I had realized, if only they can make the connection. They care.

All of these experiences led me to a discussion with our product owner about the time after I have left. I was suggesting he might want to pay more attention to what comes out of the pipeline after I am gone. His idea was different, and interesting.

He said that the team's current ability to deliver working software without his intervention, just pulling him in as needed, is what he sees as the R&D SLA (service level agreement). He expects the team to refill the position to continue delivering to the SLA.

Remembering back four years to the same person's shock at "The software can work when it comes to me?!?!? I thought that was impossible!", we've come a long way.

I'm proud of my contribution, but even more, I'm proud of my team for accepting and welcoming my help in making them awesome. It's great to see that we've created an SLA retroactively to define that good is what we have now. And yet, it can still get better.

The team is looking for a tester to replace me. It's a great job that I wouldn't have left behind unless there was another, even greater one that I couldn't have had without the experiences this job allowed me. You can find me at 

Friday, August 12, 2016

The programming non-programmer

Over the years, I've time and time again referred to myself as a non-programmer. For me, it means that I've always rather spent my time on something other than writing code. And there's a lot to do in software projects, other than writing code.

This post is inspired by a comment I saw today "I found Ruby easy to learn, I'm not a programmer". It reminded me that there's a whole tribe of people who identify as programming non-programmers. Some of us like code and coding a lot. But there's something other than programming that defines our identity.

Many of my tribe are testers. And amongst these testers, there are many who are more technical than others give them credit for, yet identify as non-programmers.

I've spent over a year trying to learn to say that I'm a tester and a programmer. It's still hard. It's hard even though, over the years starting from my university studies, I've written code in 13 languages - not counting HTML/CSS.

Why would anyone who can program identify as a non-programmer?

People have wondered. I've wondered. I don't know the reasons of all the others. For some, it might be a way of saying that I'm not writing production code. Or that I write code using Stack Overflow (don't we all...). Or that I'm not as fluent as I imagine others to be.

For me being a non-programmer is about safety.

I'll share a few memories.

Back to School

In university, on one of the programming courses, I was working like crazy to get the stuff done by the deadlines. Learning all that was needed to complete the assignments, without a demo-programming background from the age of 12, meant a lot of catching up. The school never really taught anything. They passed you a challenge, and either you got it done, or you did some more research to learn enough to get it done. We did not have much of a Stack Overflow back then.

There was no pair programming. It was very much solo work. And the environment emphasized the solo work, reminding me regularly that one of my male classmates must have done my assignments. That girls get coding assignments done by smiling. I don't think I really cared, but looking back, it was enough to keep me from asking for help. I can do things myself - an attitude I struggle to let go of decades later.

Coming to Testing

I soon learned software development included things other than programming. And awareness of programming would not hurt anyway. I fell in love with testing and the super-power of empirical evidence.

There was a point in time when everyone hated testers - or so it felt. Not respected, cornered into the stupid work of running manual test cases, reporting issues of no relevance at the end of the waterfall, where nothing could be done based on the provided feedback. A lot of attitudes. Attitudes the testers' community still reports, even if my day-to-day has luckily been free of those for quite some time.

My gender was never an issue as a tester. My role was an issue that overshadowed everything else.

Programming tasks

When I was a non-programmer, it wasn't really about me when I got to hear that a friend had never seen a woman who was any good as a programmer. I cared about being good at what I do, but as a non-programmer, that wasn't about me. I got to hear two colleagues talk about women never writing anything but the comments in code. Well, I didn't write even the comments in any code they had seen, so again, not about me. And if there was code I for any reason wrote, the help in reviewing and the extensive feedback to help me learn were overwhelming. It felt like everyone volunteered to help me out, to the point of making me escape.

Every time I write code, I can't forget that I'm a woman. Every time I go to coding events, I can't forget that I'm a woman. Even when people are nice and say nothing about it.

As a tester, I'm just me. And you know, no one is *just* anything. If I were a programmer, I would have left this industry a long time ago. Saying I am a programmer still makes me uneasy - after 13 languages. I get most of the good stuff (geeky discussions) but much less of the bad when I'm a tester - a tester extraordinaire!

Being a programming non-programmer is safe. Being a non-programmer is safe.

Wednesday, August 10, 2016

A deeper dig into conference organizing

As Software Testing Club shared my wishful post about conferences changing their practice of making speakers pay for speaking (through speaker-covered travel expenses), there was a response that leads me to share advice I gave someone else in private.
When I pay the travel expenses, I care a little about where the person comes from. True. But simultaneously, I care about representativeness. I live in Finland, and I know a lot of awesome speakers from the US, Canada and the UK. Unsurprisingly, the natively English-speaking countries dominate my view of the speaking world. I recognize this, and I know Finland and the rest of Europe are full of just as amazing experiences, ones that might fit my cultural context much better than e.g. American views. I want to hear from diverse groups of people, and I'm willing to invest in that.

Looking at this from another angle, if speakers must pay their own way, doesn't that strongly disfavor foreign speakers? They can't afford to submit, unless they are privileged with the ability to pay. The organizers then select from submissions that include only those who can afford to pay, and while the organizers can disregard the cost factor, they still carry a risk factor. Someone from far away might not have understood the implications of acceptance (this happens often) and is more likely to cancel and cause replanning of contents at a late stage.

All this made me think of an email that I wanted to share anonymously: a realization that there are local and international conferences, even if the CFPs all appear international, and that these two need to act differently.

On the topic of Compensation Considerations for a Local Non-Profit Conference

I can only share my views and experiences, and I'm happy to try to do so. 

I've thought that local conferences can be local in two ways: local for sourcing the speaking talent and/or local for finding the participants. A lot of the time, it would be helpful if conferences shared their vision on what talent pool they're primarily trying to draw from. For example, I would rarely accept people from far away to speak at local conferences other than in invited talks, because my primary focus is on showing something not local (the keynote) and then on strengthening the local talent pool. People need local, safer-to-fail places to dare to go before considering the international stages. Around here, the safety starts from local new speakers being allowed to speak in their native language. 

Your chosen approach of being very upfront about free admission but not paying the expenses is the industry norm. You are probably well aware that I'm standing up against that industry norm, though I might not yet have written about all the fine-tuned ideas on what and why. The primary reason is that I want financial considerations to stop being a block for diversity. Paying the expenses is a start, but that goal actually requires that speaking would also cover the lost income when the loss hits an individual. 

Diversity in this case is not just diversity of gender and race; it's also diversity in the voices available in our industry. In testing in particular, the majority of people are not allowed to go to conferences by their employers other than on their own time. If you've never been to a conference, the likelihood of you speaking at one is low. Many companies have little interest in making their employees speakers, and people have to be well-versed and driven to overcome the lack of guidance in that direction. Product companies have awesome experiences, but little interest (other than individuals' needs of learning) in showing up as speakers, especially if they are from industries that have little to sell to my audience (some argue everyone has their employer brand to sell, but that tends to be a role reserved for people specialized in it). 

Locally, without adding costs to the speakers, you can do a lot for this diversity. The barrier there is first and foremost encouraging people to speak and making them realize their voices would be interesting. My observation is that locally the problem is more in the submission process. People expect the organizers to know how to find the speakers, without the speakers announcing their existence. I could talk indefinitely about the models of how this works and could work. I also recognize that things might be culturally different in other places, but in Finland, relying on a call for proposals for a local conference would be an insane choice. You would only get consultants with something to sell - we're not even that tempting a holiday location. 

Some consultants sell from the stage and others don’t. I don’t want to ban consultants, but I want to find ones that don’t sell from the stage. When the costs of speaking go up, selling on stage becomes the norm. 

Sometimes, audiences don’t care if the low-fare local conference is full of sell-from-stage speakers. Sometimes, they don’t know things could be different because they’ve always been to low cost events that turn out that way. 

This is really a puzzle of balance for the organizers. You might need the budget to pay for both new, otherwise blocked voices and senior voices that don’t sell from stage. You might get lucky and find people who can afford to invest or locals who don’t need to invest. You set your price and expectations of locality. You can pay some (keynoters, make people apply for scholarship if costs are prohibitive). The senior speakers can do the math of participants and ticket prices and choose a little where they show up based on fairness and opportunity of learning. You can have higher ticket price and then hand out free admissions. You can have it affordable for everyone. Every time you have to ask for special treatment (incl. travel compensation), you lose a portion of people who would show up if things were more straightforward. 

Whether you’re non-profit or for profit, this bit works very much the same way. Both seek a way of making a profit (or not making a loss), the difference is just on what the profit is used on (and scale of it, perhaps). 

So my advice:
  • focus on sourcing speakers locally, and the cost aspect isn’t so relevant for diversity
  • recognize the other blocks for local diversity. For example, I recently learned that the 50:50 pledge has collected 3000 names of women in tech, so there are quite a number of women hoping to be reached out to specifically, not just as a mass of "you could submit"
  • make your own choices on prices and compensations and stick to them; everyone supports you when you aim for fairness. There's no one right answer here, you're balancing the audience and speaker needs.