Wednesday, August 31, 2016

A peculiar relationship to typos

I was sitting in a meeting, looking at code. A summer intern's last day at the office. The code "works" but has not been tested by anyone other than him. The last time I tried checking it out, I got no further than the first page, which was completely empty. Serendipitously, I had tried a use case where the two-role requirement for the feature did not hold (not by accident, though...)

As it was the last day, it is obvious to me that someone else will take over maintenance (fixing) of the feature. Then again, I have two more days at the office myself, so chances are no one will notice if it is broken.

We sit together and look at what he has done. He opens a page of code, in small font, and before I can stop myself, I point out: "the method on line 1233 has a typo". The feeling was peculiar. First of all, it made me realize how I see typos. It's almost as if they were in a different dimensional layer. They block my view of other things. Second, it takes me significant effort not to point them out and go past them. I can do that, but it drains me more than I feel it should.

These developers have had enough contact with me that they deal with this peculiarity quite wonderfully. Refactoring the name took seconds, and was repeated on a few other occasions over the thousands of lines we were eyeing.

It reminded me that I often find myself explaining to developers how different people are in how they deal with typos. Some people never seem to notice them. Some people (like myself) work hard to move past them, or find approaches where fixing them causes less hassle than going through a Jira-ticket process would.

We need to care for different needs. And it just so happens that groups identifying as testers seem to include a few more people like me, who use significant effort to move past typos.


Tester / Programmer happiness in specialization

Browsing through Twitter, I ran into an interesting question.
At first I thought, what does *happy* mean? That there are never any negative emotions? No frustration? But with a little pondering, I came to the conclusion that happy is just relative to being happy without the separate roles.

I believe that in the last four years in particular, my programmers and I have had a decently happy co-existence with separate developer and tester roles.

When we need to dig deep into the code and figure out adding a functionality or a fix, programmers know their way around (as in reminding themselves of what was there; no one remembers this stuff by heart). When we think we're clear on what we're building or what we've built, I dig in deep to see flaws and omissions.

Surely, I touch code. Surely, programmers test. But that is not what makes our roles separate. The separation comes from deep development of skills that sometimes appears like magic.

With the separate roles, some of the best experiences come from mob programming. Mentally connected and engaged, sharing a purpose without sharing the role. Paying attention to different details. And correcting things as they emerge without ego in play.

Not that there's much ego in play in everyday life either. I've been fortunate enough to work with programmers who get praised for the good quality they deliver (with me) and who remember things were different (without me). They invite my feedback. They test themselves yet miss connections.

My programmers and I are not bored. We are not helpless and powerless. We're equally respected (and valued in financial terms), and we support one another.

We work together well. I can't say that about all the groups I've worked with as tester.

My sources of unhappiness don't come out of the separate tester / developer roles. They come from either role having *unskilled* people who *refuse to learn*. With or without the role separation, these types of people first need to let go of the idea that bad work is acceptable day in and day out.

Natalie asks in her tweet chain: "When the tester role works, what does that look like?"

It looks like mutual caring and respect, and drive for deep skills and improvement to create awesome products in collaboration.

Friday, August 26, 2016

How would I describe my new job?

Counting down to the last week of my time at Granlund, which I've wholeheartedly enjoyed, I'm starting to drop mentions of life after Granlund.

I have a new job. I've signed the contract. I've talked about the job, but I have not done the job. Inspired by a tweet, I wanted to write down what I think it will be about.

My new job is with F-Secure. I'll work on the Corporate products, and in particular, the corporate security client product. I used to work for F-Secure 7 years ago, so I'm returning to a place I loved, that has changed just as much as I have changed while gone.

Last time I was there, I worked on the consumer client. Corporate is different. If there's one perception I have of how it is different, it is that the control over the environment isn't with the product company. Moving from a world where I could, at will, install a new version for customers on a daily basis, this environment is bound to be more constrained.

My role is one of a Lead Quality Engineer. I have no idea what it really means or what expectations people have of it. For me it means I get to be a hands-on tester who is senior enough to do test leadership through focusing on empirical evidence. I will never be "just a tester", but I will never be "just a manager" either. I'm both. And I will be a programmer whenever I feel like it, even if that identity still gets overshadowed by the others I hold more dear.

There are a few things I look forward to in particular:
  • Solving the continuous feedback puzzle for the corporate side
  • Pair-testing with a dedicated automation specialist
  • Figuring out team work when there's an invisible wall of technology selection (C++ / Python) 
  • Untangling interrelations through empirical focus in cross-team larger organization setting
  • Delivering the first version on a seemingly unrealistic schedule by focusing on small incremental pieces of value - do less, smarter
  • Working with other testers who want to be awesome (this was what I loved about F-Secure 7 years ago - co-creating innovations on how we work) 
  • Organizing a bunch of local meetups with a company that has a great location and the openness to invite others to learn with us
  • Having conference (keynote) speaking as part of my work instead of my hobby on the side
I can't wait to start on these. Sad to go, and excited to start on something new. 

Wednesday, August 24, 2016

Visual Talk Feedback

Last spring, a Finnish colleague was preparing for his short presentation at an international conference. I've had a long-going habit of practicing my talks with the local community, and he decided to do the same. He invited a group together to hear him deliver the talk.

It was an interesting talk, and yet we had a lot to say on the details of it. Before we got to share what was on our minds, one of the people in the audience suggested a small individual exercise. We jotted down the main points of the story line chronologically on a whiteboard together. Then each of us took a moment to think about how engaged we felt at different points of the presentation. Finally, we all took turns drawing our feeling timeline on the whiteboard.


What surprised us all in the visualization was how differently we saw what the engaging key takeaway moments were for us. A diverse group appreciated different points!

Knowing that some of us liked almost all parts of the presentation provided a great frame for talking through the improvement details each of us had. It also generated improvement ideas that would not have been available without the image we could refer to.

Today, I was listening to a talk in a coaching session over Skype. I was jotting down the story line and thinking about how I could visualize the same thing online. Next time, I'll get Paper on iPad out early on and share that with whoever I'm mentoring. It will provide us an anchor to see how the talk improves. And it gives a mechanism for inviting your other practice audiences to engage in giving feedback.


It was just a typo

As a tester, I'm a big believer in fixing UI string typos as I see them. You know, just going into the code, and spending a little time fixing the problem instead of spending the same (or more) on reporting it. These are just typos. These are changes that I find easy to understand. And yet, I test after I make those changes.

In the last week, I was testing a cleanup routine. Over the years, our product database has piled up quite a lot of cruft, not least because of an earlier (imperfect) implementation that created duplicate rows for everything touched in a common user flow. I asked the developer who created the cleanup routine what it was about and for his advice on testing it, only to learn that it was "straightforward" and that he had spent time testing it.

As we ran the cleanup routine on the full data, it became obvious that "testing it" was not what I would mean by testing it. The cleanup for a copy of production data took 6 hours - something no one could estimate before the run started. Meanwhile, the database shouldn't be touched, or things will get messy.

So we talked about how we cannot update production like we do the test environments - hoping no one will touch it. We need to actually turn things off and show a message for the poor users.

The six hours, and a third of the database size vanishing, hint to me that this is anything but straightforward, because our data is far from straightforward. With the very first tests, I discovered that data had been lost that shouldn't have been, resulting in what we would refer to as a severe problem: the product's main feature, reports, not working at all. To find the problem, all I needed to do was try out the features that rely on the data, starting from the most important: this one.

Fast-forward a few days and there's a fix for the problem. And the developer tells me it was just a typo. Somewhere in the queries with IDs, one of the groupings was missing a prefix. We talk about the importance of testing and I share what I will do next, only to learn he had not thought of it. I joke that at least he did not tell me it was just a 10-minute fix after using a significant amount of my time to make him even aware that a fix was needed.
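The mechanics of that kind of typo are easy to underestimate, so here is a hypothetical sketch of how one missing prefix in a cleanup query can silently wipe out data. All table and column names here are made up for illustration; this is not the actual product code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, obj_id INTEGER, value REAL)")
# Three objects; obj_id 1 has a duplicate row (ids 1 and 2).
c.executemany("INSERT INTO readings VALUES (?, ?, ?)",
              [(1, 1, 1.0), (2, 1, 1.0), (3, 2, 2.0), (4, 3, 3.0)])

# Intended cleanup: keep the lowest id per obj_id. The correlation needs
# the outer table's prefix:
#     ... WHERE r2.obj_id = readings.obj_id
# which would leave 3 rows, one per object.
#
# The typo'd version below drops that prefix. The bare obj_id resolves
# to r2.obj_id, making the condition always true; the subquery returns
# the global minimum id, and the DELETE wipes every row but one.
c.execute("""
    DELETE FROM readings
    WHERE id <> (SELECT MIN(id) FROM readings r2 WHERE r2.obj_id = obj_id)
""")
print(c.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # prints 1
```

The diff for the fix is a single token; only exercising the features that depend on the data reveals the extent of the damage.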

The phrase "it was just a typo" is interesting. We're an industry that has blown up space rockets for just a typo. We have lost significant amounts of money for various organizations for just a typo. Just a typo might be one of the most common sources of severe problems.

For any developers out there - I love the moment when you learn that it's not about the extent of the fix but the extent of the problem the lack of a fix causes. I respect the fact that there are fixes that are hard and complex. "Just a typo" is a way of expressing that this isn't one of those.

Tuesday, August 23, 2016

Just read the check-in!

Today was one of those days when something again emerged. At first, there was a hunch of a bug messing up a sort order, and all of a sudden there was a fix for it.

The fix came with very little explanation. So my tester detective instinct drove me to my usual routines. I went to see the check-in in version control and routinely opened up the three changed files without any intention of actually reading them.

The first thing I realized was that the files being changed had names that matched my idea of what I would be thinking of testing. More often than I care to remember, this has not been the case.

The first nagging feeling came from realizing there were three files. A small fix, and three files changing. So I looked at the diffs to see that the changes were more extensive than "I fixed a bug" warranted.

I walked up to the developer and asked about the changes - "So you needed to rewrite the sorting?" - to learn that it was long overdue.

With a little routine investigative work, I had two things I wouldn't have otherwise:
  1. An actual content discussion with the developer, who thought that the change he was making was obvious
  2. A wider set of testing ideas I would spend time on to understand whether the newly reimplemented feature would serve us as well as the bad old one had. 
There's so much more to having access to your version control as a tester than reviewing code or checking in your own code and changes. Looking at check-ins improves communication and keeps absent-minded developers honest. 

Circular discussion pattern with ApprovalTests

At Agile 2016 on Monday evening, some people from the Testing track got together for a dinner. Discussions led to ApprovalTests with Llewellyn Falco, and an hour later people were starting to get a grasp of what it is - even though Golden Master, I would have thought, is quite a common concept.

Just a few weeks earlier, I was showing ApprovalTests to a local friend, and he felt very confused by the whole concept.

Confusion happens a lot. For me, it was helpful to understand, over a longer period of time, that:
  • The "right" level of comparison could be Asserts (hand-crafted checks) vs. Approvals (pushing results to a file and recognizing/reviewing them for correctness before approving them as checks). 
  • You can make a golden master of just about anything you can represent in a file, not just text. 
  • The custom asserts are packaged clean-up extensions for types of objects that make verifying that type of object even more straightforward. 
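The difference between the two styles can be sketched with a hand-rolled golden-master check. This is not the actual ApprovalTests API, just a minimal illustration of the approve-a-file workflow it builds on; all names are made up:

```python
import pathlib
import tempfile

def verify(name, received, approved_dir):
    """Minimal golden-master check: compare received text against an
    approved file; on mismatch (or first run), leave a .received file
    behind for a human to review and approve."""
    approved = approved_dir / f"{name}.approved.txt"
    expected = approved.read_text() if approved.exists() else None
    if received == expected:
        return True
    (approved_dir / f"{name}.received.txt").write_text(received)
    return False

# An assert pins one hand-picked detail, e.g.:
#     assert page_title == "Front page"
# An approval pins the whole rendered artifact:
with tempfile.TemporaryDirectory() as tmp:
    tmp = pathlib.Path(tmp)
    # First run: nothing approved yet, so the check fails and leaves
    # a .received file to look at.
    assert verify("home_page", "<h1>Hello</h1>", tmp) is False
    # A human reviews the output and approves it by renaming the file.
    (tmp / "home_page.received.txt").rename(tmp / "home_page.approved.txt")
    # From now on, the same output passes; any change fails for review.
    assert verify("home_page", "<h1>Hello</h1>", tmp) is True
    assert verify("home_page", "<h1>Hullo</h1>", tmp) is False
```

The "anything you can represent in a file" point falls out of this shape: the received value can just as well be a rendered page, an image, or a report, as long as it can be serialized for comparison.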
Last week, I watched my European Testing Conference co-organizers Aki Salmi and Llewellyn Falco work on the conference website. There was content I wanted to add that the platform did not support without a significant restructuring effort. The site is nothing fancy, just Jekyll + markup files built into HTML. It has just a few pages.

As they paired, the first thing they added was ApprovalTests for the current pages, to keep them under control while restructuring. For the next couple of hours, I just listened in as they stumbled on various types of unexpected problems that the tests caught, moving fast to fix things and adjust whatever they were changing. I felt I was listening to the magic of "proper unit tests" that I so rarely get to see as part of my work.

Aki tweeted after the session: 
If you go see the tweet I quoted, an exemplary confusion happens as a result of it.
  1. Someone states ApprovalTests are somehow special / a good idea.
  2. Someone else asks why they are different from normal tests.
  3. An example is given of how they are different.
  4. The example is dismissed as something you wouldn't want to test anyway.
I don't mean to pick on the person in this particular discussion, as what he says is something that happens again and again. It seems that it takes time for the conceptual differences of ApprovalTests in unit testing to sink in to see the potential.

I look at these discussions more from the positive angle of what happens to the programming work when these tests are around, and I see it again and again. In the hands of Llewellyn Falco and anyone who pairs with him, ApprovalTests are magical. Finding a way of expressing that magic is a wonderful puzzle that often directs my thinking around testing and ApprovalTests. 

Thursday, August 18, 2016

Defining our SLA retroactively

I've been fluctuating between being focused and energetic, getting all the stuff in as good order as I possibly can before changing jobs (my last day at Granlund is 2.9), and being sad and panicky about actually having to leave my team.

Today was one of those sad and panicky days, as I learned that the last three things coming out of our pipeline did not really work quite as expected, and feedback was needed.

We changed a little editing feature with new rules, resulting in an inability to do any editing for that type of object after the change - the data does not yet adhere to the new rules, and it was not "part of the assignment" to care for the data. And yet, we never intentionally release things that would break production.

We cleaned up some data with straightforward rules that shouldn't impact anything. Except they completely broke our reporting feature, which is based on the unclean data.

We nearly finished the main feature area we've been working on for months (too big!!), except that I know that from today's "just five more fixes" there are bound to be 3, 2 and 1 more to go.

I love the fact that my team's developers have a great track record of fixes not breaking things worse. That they take the time and thought to understand what the feedback is, instead of patching around it. And that they go the extra mile around what I had realized, if only they can make the connection. They care.

All of these experiences led me to a discussion with our product owner about the time after I have left. I suggested he might want to pay more attention to what comes out of the pipeline after I am gone. His idea was different, and interesting.

He said that the team's current ability to deliver working software without his intervention, just pulling him in as needed, is what he sees as the R&D SLA (service level agreement). He expects the team to re-fill the positions to continue delivering to the SLA.

Remembering back four years on the same person's shock on "The software can work when it comes to me?!?!? I thought that is impossible!", we've come a long way.

I'm proud of my contribution, but even more, I'm proud of my team for accepting and welcoming my help in making them awesome. It's great to see that we've created an SLA retroactively to define that good is what we have now. And yet, it can still get better.

The team is looking for a tester to replace me. It's a great job that I wouldn't have left behind unless there was another even greater I couldn't have without the experiences this job allowed me. You can find me at maaret@iki.fi. 


Friday, August 12, 2016

The programming non-programmer

Over the years, I've time and time again referred to myself as a non-programmer. For me, it means that I've always rather spent my time on something other than writing code. And there's a lot to do in software projects other than writing code.

This post is inspired by a comment I saw today "I found Ruby easy to learn, I'm not a programmer". It reminded me that there's a whole tribe of people who identify as programming non-programmers. Some of us like code and coding a lot. But there's something other than programming that defines our identity.

Many of my tribe are testers. And amongst these testers, there are many who are more technical than others give them credit for, yet still identify as non-programmers.

I've spent over a year trying to learn to say that I'm a tester and a programmer. It's still hard. It's hard even though, over the years starting from my university studies, I've written code in 13 languages - not counting HTML/CSS.

Why would anyone who can program identify as a non-programmer?

People have wondered. I've wondered. I don't know the reasons of all the others. For some, it might be a way of saying that I'm not writing production code. Or that I write code using Stack Overflow (don't we all...). Or that I'm not as fluent as I imagine others to be.

For me being a non-programmer is about safety.

I'll share a few memories.

Back to School

In university, on one of the programming courses, I was working like crazy to get the stuff done by the deadlines. Learning all that was needed to complete the assignments, not having a demo-programming background from the age of 12, meant a lot of catching up. The school never really taught anything. They passed you a challenge, and either you got it done or you did some more research to learn enough to get it done. We did not have much of a Stack Overflow back then.

There was no pair programming. It was very much solo work. And the environment emphasized the solo work, reminding me regularly that one of my male classmates must have done my assignments. That girls get coding assignments done by smiling. I don't think I really cared, but looking back, it was enough to make me not ask for help. I can do things myself - an attitude I struggle to let go of decades later.

Coming to Testing

I soon learned software development had things other than programming, and awareness of programming would not hurt anyway. I fell in love with testing and the superpower of empirical evidence.

There was a point in time when everyone hated testers - or so it felt. Not respected, cornered into the stupid work of running manual test cases, reporting issues of no relevance at the end of the waterfall, where nothing could be done based on the provided feedback. A lot of attitudes. Attitudes the testers' community still reports, even if my day-to-day has luckily been free of those for quite some time.

My gender was never an issue as a tester. My role was an issue that overshadowed everything else.

Programming tasks

When I was a non-programmer, it wasn't really about me when I got to hear that a friend had never seen a woman who was any good as a programmer. I cared about being good at what I do, but as a non-programmer, that wasn't about me. I got to hear that two colleagues talked about women never writing anything but comments in code. Well, I didn't write even the comments in any code they had seen, so again, not about me. And if there was code I for any reason wrote, the help in reviewing and the extensive feedback to help me learn were overwhelming. It felt like everyone volunteered to help me out, to the point of making me escape.

Every time I write code, I can't forget that I'm a woman. Every time I go to coding events, I can't forget that I'm a woman. Even when people are nice and say nothing about it.

As a tester, I'm just me. And you know, no one is *just* anything. If I were a programmer, I would have left this industry a long time ago. Saying I am a programmer still makes me uneasy - after 13 languages. I get most of the good stuff (geeky discussions) but much less of the bad when I'm a tester - a tester extraordinaire!

Being a programming non-programmer is safe. Being a non-programmer is safe.






Wednesday, August 10, 2016

A deeper dig into conference organizing

As Software Testing Club shared my wishful post about conferences moving away from making speakers pay for speaking (through speaker-covered travel expenses), there was a response that leads me to share advice I gave someone else in private.
When I pay the travel expenses, I care a little about where the person comes from. True. But simultaneously, I care about representativeness. I live in Finland, and I know a lot of awesome speakers from the US, Canada and the UK. Unsurprisingly, the natively English-speaking countries dominate my view of the speaking world. I recognize this, and I know Finland and the rest of Europe are full of just as amazing experiences that might fit my cultural context much better than e.g. American views. I want to hear from diverse groups of people, and I'm willing to invest in that.

Looking at this from another angle: if speakers must pay their own way, doesn't that strongly disfavor foreign speakers? They can't afford to submit unless they are privileged with the ability to pay. The organizers then select from submissions that include only those who can afford to pay, and while they can disregard the cost factor, the submissions still include a risk factor. Someone from far away might not have understood the implications of acceptance (this happens often) and is more likely to cancel, causing replanning of contents at a late stage.

All this made me think of an email that I wanted to share anonymously: the realization that there are local and international conferences, even if CFPs appear to be international, and that these two need to act differently.

On the topic of Compensation Considerations for a Local Non-Profit Conference

I can only share my views and experiences, and happy to try to do so. 

I've thought that local conferences can be local in two ways: local for sourcing the speaking talent and/or local for finding the participants. A lot of the time, it would be helpful if conferences shared their vision of which talent pool they're primarily trying to draw from. For example, I would rarely accept people from far away to speak at local conferences other than in invited talks, because my primary focus is on showing something not local (keynote) and then on strengthening the local talent pool. People need local, safer-to-fail places to dare to go before considering the international stages. Around here, the safety starts with local new speakers being allowed to speak in their native language. 

Your chosen approach of being very upfront about free admission but not paying the expenses is the industry norm. You are probably well aware that I'm standing up against that industry norm, but all the fine-tuned ideas on what and why I might not even have written about yet. The primary reason is that I want to see financial considerations stop being a block for diversity. Paying the expenses is a start, but that goal actually requires that speaking would also cover the lost income when the loss hits an individual. 

Diversity in this case is not just diversity of gender and race, it's also diversity in the voices available in our industry. In testing in particular, the majority of people are not allowed to attend conferences by their employers other than on their own time. If you've never been to a conference, the likelihood of you speaking at one is low. Many companies have little interest in making their employees speakers, and people have to be well-versed and driven to overcome the lack of guidance in that direction. Product companies have awesome experiences, but little interest (other than an individual's need for learning) in showing up as speakers, especially if they are from industries that have little to sell to my audience (some argue all have to sell their employer brand, but that tends to be a role reserved for people specialized in that). 

Locally, without adding costs to the speakers, you can do a lot for this diversity. The barrier there is first and foremost encouraging people to speak and making them realize their voices would be interesting. My observation is that locally the problem is more in the submission process. People expect the organizers to know how to find the speakers, without the speakers announcing their existence. I could talk about the models of how this works, and could work, indefinitely. I also recognize that things might be culturally different in other places, but in Finland, relying on a call for proposals for a local conference would be an insane choice. You would only get consultants with something to sell - we're not even that tempting a holiday location.  

Some consultants sell from the stage and others don’t. I don’t want to ban consultants, but I want to find ones that don’t sell from the stage. When the costs of speaking go up, selling on stage becomes the norm. 

Sometimes, audiences don’t care if the low-fare local conference is full of sell-from-stage speakers. Sometimes, they don’t know things could be different because they’ve always been to low cost events that turn out that way. 

This is really a puzzle of balance for the organizers. You might need the budget to pay for both new, otherwise blocked voices and senior voices that don’t sell from stage. You might get lucky and find people who can afford to invest or locals who don’t need to invest. You set your price and expectations of locality. You can pay some (keynoters, make people apply for scholarship if costs are prohibitive). The senior speakers can do the math of participants and ticket prices and choose a little where they show up based on fairness and opportunity of learning. You can have higher ticket price and then hand out free admissions. You can have it affordable for everyone. Every time you have to ask for special treatment (incl. travel compensation), you lose a portion of people who would show up if things were more straightforward. 

Whether you’re non-profit or for profit, this bit works very much the same way. Both seek a way of making a profit (or not making a loss), the difference is just on what the profit is used on (and scale of it, perhaps). 

So my advice:
  • focus on sourcing speakers locally, and the cost aspect isn’t so relevant for diversity
  • recognize the other blocks to local diversity. For example, I recently learned that the 50:50 pledge has collected 3000 names of women in tech, so there's quite a number of women hoping to be reached out to specifically, not just as part of a mass "you could submit"
  • make your own choices on prices and compensations and stick to them; everyone supports you when you aim for fairness. There's no one right answer here; you're balancing the audience and speaker needs.



Monday, August 8, 2016

Testing Conferences and new tools

Going around GitHub, it soon becomes evident: there are a lot of testing tool projects out there. Some of them have become popular, judging by both downloads and discussions from users other than the tool developers. Others may include new insights for actual problems we are still facing, but the sheer volume is exhausting.


As a conference organizer (and previously a proposal reviewer), I get to see a lot of new tool-related proposals that don't do too well at getting selected for the major conferences. Even with an open-source type of approach, they still come across as sales pitches. Sometimes giving the visibility is justified, and my rule of selection, I notice, is based on there being users other than the developers.

So I wonder: what are the forums in which these great innovators of new tools meet? Where do they go to see if other people are solving the same problems? Are any of the places online, so that travel costs would not be prohibitive?

(There's one academic-background tool that I really want to get more info on, about multi-locators. Yes, the challenge of locating an element in Selenium is something I experience, and I would love to see if there is something that helps with that for real.)
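I don't know how that academic tool works, but the basic idea of multi-locators - trying several locators in priority order so that one page change doesn't immediately break the test - can be sketched like this. The locator strings and the toy page are made up; a real version would wrap a Selenium driver instead:

```python
def find_with_fallbacks(find, locators):
    """Try locators in order; return the first element found along with
    the locator that worked, so drifting locators show up in logs."""
    failures = []
    for locator in locators:
        try:
            return find(locator), locator
        except LookupError:
            failures.append(locator)
    raise LookupError(f"no locator matched, tried: {failures}")

# A toy 'page' standing in for a live DOM:
page = {"css:#submit": "<button>"}

def find(locator):
    if locator in page:
        return page[locator]
    raise LookupError(locator)

# The old id-based locator has drifted; the css fallback still works.
element, used = find_with_fallbacks(find, ["id:submit-old", "css:#submit"])
print(used)  # prints css:#submit
```

Recording which fallback actually matched is the interesting part for a tester: it turns silent locator drift into a visible signal.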

I'm also thinking through (with support from my European Testing Conference co-organizers) my personal stance: what would make a tool-based presentation stand out enough to be presented in "mainstream" conference, even when the focus is both on how testers and developers perceive testing? What are the challenges in the world of developer testing that are so interesting that they deserve to be highlighted?

Most conferences just reject these proposals. We've selected an approach where we talk to each of our submitters, and with them taking the time to submit to us, I'd really like to help them forward even if the conference does not seem like the right place this time.

Ideas and views are welcome.

Sunday, August 7, 2016

Thinking of Microskills


There was a post I read on testers obviously needing to learn to code. I still think reading code, and pairing on writing code that needs to be created, are viable options, and will lead to accidentally learning to code too. Being a tester amongst 20 programmers, the obviousness of me needing to code isn't quite so obvious. It's obvious I need to be able to work with my lovely programmers effectively. Having any specific skill isn't going to hurt me. But when I'm missing several skills, I don't find it *obvious* that programming would be the first. I've written about this a lot before.

The post did something really nice, though. It dropped out of talking about programming as if it were one general thing only, and listed a good number of ideas of tasks that would be programming. Tasks that most testers might have done or been part of doing.

Reading about the tasks triggered me to go back to a note I wrote for myself a few days earlier, while pairing with a developer on cleaning up some test automation code. We really need to drill down in our communication.

Instead of saying programming, we need to talk about specific tasks requiring programming. Instead of talking about the skill of programming, we need to drill down to specific skills that would enable us to perform those tasks. In the world of team work, we have the option of getting comfortable with paired work and leveraging the "friends with pickup trucks", a metaphor from James Bach for knowing when you don't need a resource or skill of your own, as long as you have a friend you can rely on.

In drilling down to the skills, it's not just skills related to a task. We would benefit from microskills, little things that take us towards something bigger.

An example of a microskill I was thinking of while pairing was being comfortable driving through code: knowing CTRL+click to go into a class, or finding usages to go up the call hierarchy.

For established programmers, all this seems obvious. For people new to programming, every small thing is a step forward in the learning.

We should celebrate learning microskills. Acknowledging their existence and giving them credit would make learning many things more approachable. Belittling those drives people away from this path.

Testing too has a lot of task types, skill areas and microskills. Many of those rub off on new people when mobbing. Acknowledging them more actively by naming them feels necessary.

Hindsight and personal responsibilities

I enjoy looking at things in retrospect. When I talk or write about things in retrospect, I often come off as someone who thinks that there was an action in my past that I alone could have done differently to change how the world is today. I know things are not that simple, but I love the thought play regardless. And while I can't change anything of the past, I don't want to spend my time worrying about things I've done. I want to choose things I will actively try to do differently in the future.

My previous point on hindsight and automation in my current job is a great one. I can't go back in time to get myself to do more automation work. I've had my hands full testing business model assumptions to drop features for the right focus and helping make every feature count. We've all learned a lot, and in the four years, we've gone from no automation of any kind, through continuous delivery without automation, to continuous delivery with some automation (database checks, Selenium, assert-based unit tests, approvals-based unit tests and approvals-based integration tests). Most of my team now knows how to add tests of most of these types, and I do too. I just find it more important that it's the team, not me.
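For readers unfamiliar with the distinction between assert-based and approvals-based tests, here is a minimal, hypothetical sketch of the idea. This is not our product's code; `format_invoice`, the test names and the in-memory store are all made up for illustration (real approval-testing tools store the approved output in files a human reviews):

```python
# Approval-style checking, sketched with an in-memory store instead of the
# "approved" files a real approval-testing library would use.
approved_store = {}

def verify(test_name, received):
    """Fail on change: compare output against a previously approved copy."""
    if test_name not in approved_store:
        approved_store[test_name] = received  # pretend a human reviewed and approved it
        return "approved on first run (review needed)"
    assert received == approved_store[test_name], "output changed since approval"
    return "matches approved output"

def format_invoice(name, total):
    return f"Invoice for {name}: {total:.2f} EUR"

# An assert-based unit test pins one specific fact:
assert format_invoice("Acme", 12.5) == "Invoice for Acme: 12.50 EUR"

# An approvals-based test pins the whole output against the approved copy:
verify("invoice", format_invoice("Acme", 12.5))
```

The trade-off is the point: assert-based tests document intent one fact at a time, while approvals-based tests cheaply lock down large outputs (like our printouts) and alert on any change, leaving it to a human to approve or reject.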

Reflecting now that I've made my decision to move on (I resigned last Friday, that's a step), I have one month to make decisions for the future and leave my team off the best I can. That means finalizing one major feature and documenting a complex area of testing as integration approval tests my team can rely on.

Another big thing I learned in hindsight was a career changing moment for me.
I don't really imagine I could have saved all those millions alone. Instead, I'm sure from the experience that empirical, hands-on evidence of what works and what doesn't wins over speculation, and being allowed to focus on hands-on testing could easily have made a difference.

It was a career changing moment though. I went back to being a tester. I deepened my skills in testing and getting my team to work with me on testing. And on the side, I learned again to do programming - on my terms, for my purposes, only in the ways I find valuable and worthy.

I'm looking forward to my new job, which will still keep me a #tester while still advancing my career. I look forward to pairing with a specialist automation developer and with some hardcore application developers who won't touch the automation in a different language. We're going to build an awesome product. This time, without having to build up the team awesomeness first through encouraging developers beaten down by their past experiences.

One person can only do so much. Yet one person can make a difference. A great tester reminded me of an important lesson I keep tucked away: one person needs the others to make a difference and not burn out. I can always speak of things as if they are personal responsibilities, but they are really just team contributions.

Thursday, August 4, 2016

You can't convince me with reason!

At Agile2016, in our session on Strong-Style Pair Programming, we shared a story about inviting trust when pairing. We've been telling the story in our developer-tester collaboration talk too, as it was a very core experience. My understanding of the relevance of what happened has improved since I blogged about it. Actually, it has improved through blogging about it, and making it something to have deep discussions on.

In our first Pair Unit Testing session, I was asking a ton of questions. I had this big picture of testing and the application, and I wanted to figure out how whatever we'd be doing would fit into my big picture. We looked at a unit test and were about to do another one, and I insisted on some information I wasn't receiving. I felt frustrated, and was quickly building up to being annoyed. Just when I was annoyed enough to call it a failure and never try something so silly again, he said: "Give me seven minutes". As a grand gesture, he took out his phone and set the timer for seven minutes. I looked at this in disbelief. How would seven minutes help here, when 15 had brought no added understanding or connection and we were stuck? I remember feeling almost amused when I decided to humor him and give him the seven minutes - it's not like anything is going to happen in seven minutes anyway!

We strong-style paired, and I shut up. He did not explain anything, but told me what to do and what to type. In seven minutes, it started to make sense. When the buzzer went off, we were not done, but I turned it on for another seven minutes as I wanted to see us finish it. And we did.

The lesson is, we could have argued for hours. The answers I was looking for were not deliverable with the lack of experience I had at that time, but there was no way I could see that. My investment of time was about to end in annoyance. Even if I had put more time into the discussion, the likely result would have just been that I felt he wasted more of my time. Doing things and making actual progress builds trust. With the experience under my belt, we were able to reflect on it and connect things back to things that made sense to me. I was able to do some of that by myself, but there were more connections to be made through retrospecting together.

Things like this happen to me all the time. You just can't convince me with reason. It would appear that reasoning is overrated for other people too.

Reason works when you have experience to fuel it. Before that, we easily operate with fears.

Little tricks like seven minutes of trust can help. I find that working with this is a question of self-improvement. How could I try things more before I make up my mind about them? Go into the experiment mindset, and experience things. Things people suggest we do make sense to someone, and your own experience might be just what you need to see how they make sense to you.

Test automation in hindsight

For quite a bit of time now, I've no longer considered myself anti-automation. But I've still been pro-exploratory testing, very heavily. I've believed, and had evidence in my project to support it, that the exploratory testing mindset is what my programmer heavy team has been missing.

I don't find the same problems over and over again. I don't test in the same ways. I take each change as a unique thing. I encourage creating test automation, but leave the hands-on work on that to my programmers. There are many of them, and just one of me.

With this approach, we've succeeded well. We've learned to release daily. We work well together. Much of my knowledge of exploratory testing has rubbed off on the programmers, and they test better than before.

Recently, I've made a decision to move on from my current organization, and today I believe I'm only days away from turning that decision into action. My organization has been aware of my intentions, as the two years I promised them have turned into 4.5 wonderful years with an amazing team and product.

Simultaneously, the programmer side is going through a major change. Out of my 7 programmer colleagues, 1 moved to another business area "temporarily" half a year ago. The temporary became permanent, and 3 more people will move. And as this made all of us consider our choices, 1 more programmer made a personal decision to try working with another company. So my programmer colleagues are down from 7 to 2. Quite a change.

Now the low level of test automation is becoming painful. Programmer know-how is lower, and mistakes are more likely when moving into new areas and introducing new people.

Seeing a deadline on my availability makes me rethink my test-strategic choices in hindsight. With the choices I made on emphasizing the exploratory and distributing that skill, I did not prepare the product well for the future now at hand. The documentation I've created will be helpful for a new tester joining in. But when I'm gone, the ability to release with the level of quality we've grown accustomed to will go down.

If I had made a strategic choice of encoding as much of my knowledge as possible into test automation, would the product have a better chance at the future? Would that have changed the success of the past? I can't really know.

One thing I know: in my next job, I want to focus more on the idea of the product doing great without me. And that is going to mean a clear focus on figuring out how I better balance exploratory testing and creating test automation together. It's about balancing short-term and long-term benefits. It's about balancing what I love doing and what I feel needs to be done.

I'm still proud of being a good tester, as in delivering good information. I'm just admitting that right now I will be better when I start codifying more of that wisdom into test automation.

I'm delighted to realize that the last month is still my opportunity to codify one of the most complicated areas in our product, printouts. My focus is now on ensuring the product has the best chances to continue being amazing without my contribution, and automation plays a big role in that.

For any of you non-programming testers out there: I was one of you just a few years ago. Mob programming and strong-style pairing made me comfortable working with code even though I was saying I'm not interested. My exploratory testing skills still make me special. Putting them together with automation makes me better. Choices in the order in which you learn are arbitrary. Learning anything takes time and focus. Starting somewhere enables you to look through your choices in hindsight, and take note of things you want to change for the future.

Wednesday, August 3, 2016

Is the world of testing conferences changing?

I was delighted today to hear on Twitter news of a change: Let's Test Conference 2017 is no longer #PayToSpeak. The conference's Call for Proposals, open until Aug 21st, states:

If your proposal is accepted, we offer lodging for two nights, all meals from Sunday dinner to Tuesday lunch, and this year we also offer to cover your travel expenses within reasonable fares/airfares.
Going to the conference page to verify the rumor, I also learned another great thing: the sessions they seek are hands-on testing sessions.

Behind the scenes, I've heard that another relevant conference in the testing field is also thinking of moving away from #PayToSpeak. I hope it turns out to be true.

I took a screenshot of a presentation somewhere online with an amazing message.


Paying the stay AND the travel (=expenses) enables many of the voices that couldn't submit to submit.

If you are thinking of submitting and want to brush up your proposal, I'm happy to help as a Speak Easy mentor. And there's Speak Easy itself, which offers help on a wider scale than I can stretch to.

I just started using Calendly to book my sessions, and will commit more time if you will also do a webinar for #TestGems.


Tuesday, August 2, 2016

Authoritative truths

Just in case this comes as a surprise to you: my blog is not an authoritative truth on anything. It is my way of thinking out loud, with a mild hope that someone else would find what I'm processing valuable enough to read it.

I write from various triggers. If I hear something somewhere that makes me feel I need to clarify, I will. Like with my previous post, Driver is not Typist. I might suck at writing, but I tried writing about an insight I've been having while teaching in mob format and listening to Woody Zuill: that we shouldn't think of the driver as just a typist (especially not "no thinking, no talking").

You're free to think of the driver as typist if that serves you. I'm not trying to tell you that you shouldn't. I'm trying to tell that I've learned I shouldn't.

The reason I'm blogging about this is a comment James Coplien left on the post. I had no intention of offending him by picking the exact words that outlined a thing that had bothered me in how I, with Llewellyn, guided newbies into mobbing: "no thinking at the keyboard". But apparently I did, and I'm sorry.

For a brief moment, I thought I shouldn't be writing. For a brief moment, James Coplien made me think I should leave the world of software for those who apparently know so much more and have so much more experience. But I snapped out of it.

I don't believe in authoritative truths, I believe in sharing experiences and ideas. I love the quote Woody also uses often (and the book is amazing too). I don't invalidate anyone's experience, but I try to understand my experiences better through others' experiences.


In particular, I felt sad to read this: "It is amazing that someone who claims to be driven by context took no more than 20 seconds of one example out of a 50 minute talk and found something to pick on for their own exposure and publicity on the web, without extending the courtesy of checking with me first."

I listened to the whole talk. I wasn't trying to comment on the whole talk. I failed to express that it was great that James made the connection of mobbing being a light in the people & interactions sector. I just focused on my insight, and on making a note of it.

Blogs (to me) are not articles. I write more when I need to. I haven't until now thought about having to check all details with people who act as inspiration. And now that I think of it, I still think that I don't need to.

It's great that James added his viewpoint. I just wish that it could be done without telling me that I'm after exposure and publicity, unkind and a bad context-driven tester. Attacking my person feels unjustified.

James makes a great point in the end. Read as you choose to think for yourself, even if my style of writing isn't perfect. I'm not trying to tell you what to do. 

Monday, August 1, 2016

Driver is not a typist

I listened to a talk today on How Agile and OO have lost their way by James Coplien. At around 48 minutes into the video, James introduces Mob Programming.

"One of the very exciting things I've seen emerge recently is Mob Programming ... Kind of like pair programming, except it's the entire team. You have one person at the keyboard, they may not think, they may not talk. They only type. You have a facilitator, and the rest of the team writes the code and tells the typist what to write. Then you swap roles every few minutes."

We (myself & Llewellyn Falco) teach strong-style pair programming, especially in mob programming, with the phrase "no thinking on the keyboard", which I've come to consider dangerous. I've recently observed many misunderstandings stemming from it, and softened the expression to "no decisions on the keyboard". Thinking is allowed, encouraged even. Talking back to the navigators is allowed, encouraged even.

Seeing how James frames the Mob Programming driver role as "they may not think, they may not talk", I feel the need to think about how to communicate this better.

Woody Zuill talks about a dumb input device (the keyboard) and the smart input device (the driver). The smart input device is not a typist, but a translator. The person on the keyboard primarily translates intent like "we'll need an invoice class" or "there should be a method for creating new employees" into implementation. The driver is navigated at the highest abstraction level possible, and has a lot of power over the order and details of the implementation. If the navigators disagree on the choices, a discussion emerges through the continuous review. And if the driver is a beginner as a programmer, the navigators drill down in the navigation instructions to the level the driver needs.

Within just a few hours, you see a newbie move from "type var" to "we'll need a variable to store the employee info" in the instructions. Details and typist-level instructions are momentary, an option we revert to when the driver needs it, and we move beyond typing quickly.
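To make the abstraction levels concrete, here is a small, made-up sketch of what a driver might produce from intent-level navigation. The `Employee` class and its method are hypothetical examples, not code from any real session; the comments show the navigator's words, the code shows the driver's translation:

```python
# Navigator (intent level):   "there should be a method for creating new employees"
# Navigator (location level): "put it on the Employee class, as a classmethod"
# Navigator (details level, only if the driver needs it): "type 'def create'..."

class Employee:
    def __init__(self, name: str, role: str):
        self.name = name
        self.role = role

    @classmethod
    def create(cls, name: str, role: str = "developer") -> "Employee":
        # The driver chose the classmethod form and the default role;
        # navigators review the choice continuously and can object.
        return cls(name, role)

new_hire = Employee.create("Maaret", role="tester")
```

The point of the sketch is that the same one-sentence intent leaves the driver real decisions to make (a classmethod vs. a plain function, the default value), which is exactly why the driver is a smart input device rather than a typist.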

Modern Agile Testing

My most valuable takeaway from the Agile2016 conference is my thoughts and feelings about the Wednesday keynote. Joshua Kerievsky presented the keynote on Modern Agile, and I enjoyed his framing of brushed-up principles.

The difference in emphasis between the principles of Modern Agile and agile as set out in the 2001 principles is subtle, yet relevant. And it makes me think a lot about reframing some of my ideas for Modern Agile Testing.

There are four values:

  • Make people awesome
  • Make safety a prerequisite
  • Experiment and learn rapidly
  • Deliver value continuously

Make people awesome is a great framing for thinking about quality. What kind of experience with the products we create would make the user awesome? What kind of experience creating the software would make us makers/menders awesome? What would be a little more awesome than where we are today?

Make safety a prerequisite touches me most. If we are afraid, we won't learn. And since agile, many testers have regularly been unsafe. I've blogged about how hard it feels to be constantly up against the idea that the experience I've acquired as a tester over 20 years would no longer have value without a personal commitment to automation. My personal commitment is to being better every day. Automation may play a role. In particular, it plays a role in helping me collaborate better with my developer colleagues in the creation of automation. If we are not safe, we can't be productive. But the principle is not just about the makers/menders being safe, but about all the users of our software being safe. This is the "make the world a better place, by default" principle.

Experiment and learn rapidly is core to what testers do. We identify illusions that need to be broken, favoring empirical evidence over speculation. We learn to learn in layers, and co-create better ways of learning. Everyone should learn about designing experiments. There's a need for the constructively critical eye that asks if we really know what we think we know, and drives forward ideas for experimenting.

Deliver value continuously is to say we work in small batches. Batches of value to someone. Going forward. And with each delivery, we keep safety as a prerequisite, and make people just a little more awesome. Identifying small and safe pieces often takes a focus testers have and can share.

Joshua's talk is well worth watching when it becomes available.