Friday, February 26, 2016

Changing the conversation from bugs

Today feels like a good day to celebrate the progress my team has made. I'm approaching four years at this job, and while I regularly think of moving on, I just love what we do.

Four years ago, one of the hallmark moments was a day a few months into my time at the company, when the product manager came to my cubicle, shook my hand, and told me he owed me a beer. He had thought until that point that when software came to him, he should expect it to be broken. I changed that by providing my developers the information they needed.

At first, there was a lot of pain. Our sprints were awful and always late. I convinced the team and product management to try out daily releases, and things improved a lot. We also stopped estimating the effort to implement things, and focused our discussions on how we could make each thing small and whether it was worth doing at all. All the energy that went into looking good in estimates could now go into developing ourselves and our product.

About a year ago, I was writing my CV while looking for opportunities to work in the US (still looking, but I need to find the brilliant team/organization that sponsors the H-1B visa). I looked at some of my bug statistics and added this: "Personally reporting and getting fixed 2261 issues over a period of 2.5 years on Granlund Designer and Granlund Manager projects". I was shocked: that means that on average I had reported 2.5 bugs every day. I had already worked on improving the developers' skills in developing and testing, but I felt I needed to step up my game.

A year ago, I intentionally emphasized discussion and developer learning. I started pushing my team to practice together: pairing (rarely; they hate it) and mobbing (more regularly, as they learned to love the learning). The result is that I've logged 348 issues in the last year, which is only 1.1 a day on average. My big (and regular) successes are the times when developers get in touch to explain a feature, then just look at me and tell me what they now know is wrong with their implementation. They hit the mark better, and our relationship has grown in quality.

We release on average twice a week, and our end users rarely see problems they would complain about. The amount of test automation is ridiculously low, but the technical quality of our code (a thing I champion regularly) is good and improving.

I feel I've gone through an intense year of changing the conversation from bugs to value and skills, and it's been worth it. For the second year in a row, my collaboration review with my manager focused on how much I've helped the team grow through empirical feedback and actionable suggestions, not on bugs found or missed.

This is why I love long-term commitment. I now know that apart from one troublesome individual (see the previous post), my team will excel without me when I leave.

Thursday, February 25, 2016

In search of strengths: when done is not done

A weekly meeting, and an unusual argument:

P: Oh, that feature of mine is ready. It's been ready now for a week. I would have already put it in production if you hadn't made up a rule that I can't.
T: Hmm, I must have missed something. So it's ready? You fixed all the problems? Last time I tried it, two weeks ago, it didn't really even start.
P: It has worked for me all the time. But I did some tweaks. There was nothing significant. Just give it to the customer.
T: We'll team up to test it and see where we are. We're delivering when it's ready by the team's definition.

It's been such a long time since I've had to endure these discussions that I had almost forgotten what they are like. And that they were the standard style of discussion some 20 years ago. For the most part, I'm glad those days are gone.

But what's going on here, really? The story continues with three people testing the feature as a group.

T: So, let's just go through the basic flow. This was the command, right? (setting up the data to something a user could use, instead of the developer-specific version)
P: Yep. Just run that and it all works.
U: I have a few UI-specific remarks, let's go through those in the context of use
P: I hate this nitpicking, couldn't you just go and fix it in the code?
T: Let's just see how it is. Oh, it crashed? It doesn't do that for you?
P: No. I guess there's a bug there. 
T: Oh, I know. This is because you expect this other thing to be run first, right? But look, my data has three things and this list has only one. 
P: The other application is done by someone else in the team, I just move the data here. But I guess there's a problem. There's another developer who worked close to this area, though - talk to him first.
T: Ok, so this is the summary page.
U:
T:
P: Did you have a picture of how that was supposed to be?
U: And there's still this one thing we need to improve that we just talked about
...
T: Let's sum it up. Five dialogs, two months of work on the must-do list, and more on the later-iterations list we thought would be in by now. See why we disagreed on done yesterday? Could we work out something so that you'd believe the two of us actually know a thing or two about the product and its use, and that we need to collaborate?
P: I guess I'll go work on this stuff more.

This story has background that is not visible in these discussions: knowledge of bad code design, bad choices in how things are implemented, and three years of experience that fixing just makes things worse. So I ask:
Here's the answer I love the most:
Now, in search of strengths, I work on empathy and hope the programmer will stop opting out of our team's mob Thursdays. People are the way they are because of environment and experiences. Play to the strengths. And accept that conflicts can turn into something positive, even if they are draining.

Testing isn't testing: A story of Skilled vs. Random Tester

You know, testing isn't testing. Sounds weird, right? So let me try to explain a discussion I seem to be having over and over again.

There are two people, and both of them could test the exact same thing. Let's say one of them is a skilled tester - someone who is really paying attention to how you learn, using a software product as external imagination - and the other is a random tester. The random tester might be good, but she is not the same as a skilled tester, not unless her focus is on learning to test great.

The random tester is often a developer, doing her end-of-development testing to the best of her abilities. And those abilities might be great, and the overall result well worth being called good enough.

When we then compare the "final testing" of a skilled tester and a random tester, there seems to be this mysterious, magical idea of the skilled tester just being dropped in and delivering their best performance. But since the skilled tester learns in layers, the first round of testing at the end of development isn't going to be the best performance. There's layer after layer to go deeper, and each layer provides useful results - just not the same ones.


Some managers still let skilled testers be around a little in the end, and confuse a good performance with the best performance. And many testers still confuse the Skilled Tester's good performance with the Random Tester's best performance, in particular since there's always an aspect of testing and learning even when you decide to call it "programming".

If you're serious about good quality, you'll want to enable the Skilled Tester's Best Performance. The Skilled Tester's good performance happens somewhere along the line, but the final show with the final results is the best you can get from her. The best performance happens at the end, after rounds of learning and practice.

So testing isn't testing. Performance, and the output from such a performance, varies greatly. Random testers do a great job, and some of them are very close to skilled testers. It's a question of motivation and focus.

And if you wondered why this post? I just had yet another encounter with someone who thinks they get to buy my best performance without the practice. Because skilled must mean packaged, like a machine.




Wednesday, February 24, 2016

Small batches vs. context-switching

I've just spent an hour ranting about how much I love small batches and how I don't understand why my neighbor team still releases monthly and not daily, since we are so similar in what we should be able (and willing) to do. Then, opening Twitter, I was reminded of a theme that I heard - and decided to redefine on the fly - yesterday in Jess Ingrasselino's Ministry of Testing Masterclass webinar on the Lone Tester: context-switching.

Jess mentions in her talk a few times that context-switching suits her working style and personality. Then she talks about the variety of types of tasks, which all have a common theme: helping quality emerge. She's in meetings with designers, developers, and support. She does exploratory testing and test automation. She works on requirements, discussing them early, and on the final checks. And as a lone tester, she might share her time in some way across several feature or product teams. Overall, she is responsible for defining where she will be, not sticking to someone else's shallow notion of what a "tester" should do.

I loved the talk, perhaps because I too am a lone tester with a lot of freedom to put things together in whatever way I feel helps us. I tend to identify all sorts of things that don't seem to be getting done, making some of them visible and doing some of them myself. There's no task I wouldn't be allowed to do.

When Jess talks about context-switching, my brain right away decides she doesn't mean that; she means small batches. Finishing one small thing before taking another small thing. Not interrupting in the middle and ending up thrashing. But the natural state where you can do something and then it's done. There will be more like it, but they won't be the same.

Learn to love small batches. Releasing daily to production is the most liberating experience I have. Small changes are smaller risks and smaller to test. And while testing is never done, delivering things over warehousing investments without returns is rewarding on its own.

I feel that if you know how to create small batches - for you and for people around you - you'll do great as a lone tester.

Tuesday, February 23, 2016

Testers see things and should speak up!

I've been a tester and a software catalyst for two decades. A lot of the time, the discussion around my kind revolves around feedback, and a particular kind of feedback is bugs. By finding out about them, we react to them and together make the world a better place for our end users and customers.

There was a particular lesson, though, that I took to heart quite a while ago, long enough that I think of it as common sense. When all that comes out of you is bugs and problems, there's a dire need for balance. Don't just share the bad, share the good too.

Remember to say out loud when you're positively surprised. It could be that you did not find as many bugs as expected, or any at all, or that the ones you did find were different in profile than what you would expect. It could be that you notice things change and improve. Recognize, acknowledge and share the positive things you see.

I try to live up to this in my at-office life and in my online life on Twitter: say thanks, share excitement, promote others. The latter has been a learning route for me, but we all do better when we all do better.

Early this year, I learned about yet another way of showing appreciation in a very public way. There's a new competition in Finland around promoting the good we do in software, with 7 categories, called the Blue Arrow Awards. Think of it as the Oscars of IT. The moment I learned of it, I asked for the 150 euros to submit my team into the competition - still struggling with the writing, though, but committed to doing this service for my team.

For me, the acknowledgement did not stop there. This is me: 
I'm proud of my first candidate, one who would be unlikely to be recognized despite his significant contributions in the community. While he should win, my main hope is that his work gets a little more recognition. Winning is not the point; showing appreciation is.

Everyone needs someone to run the rounds for them. Will you do it for your team and your candidate? I'm happy to help. The good stuff needs to come out!
 

Monday, February 22, 2016

Testing the same old but with a new implementation

Many of us testers run into these occasionally: there was a feature that was done and tested a while ago. Then new information emerged and a complete rewrite is in order. More often than not, there's a new understanding of requirements driving the change. There's often something new that needs to be implemented that would be hard to build onto the old implementation.

My team and I are in the final rounds of one of these changes. We had 3 reports (which are basically the main output for the users of our software) and we needed 3 new ones, which had to be essentially different. And while discussing these 3, we realized that what we actually need is a report configuration engine, as each of the new organizations coming in is likely to have its own configuration needs even if the information is the same or similar.

With this rewrite, one of the shining stars in the team took the problem and turned from someone who could deliver incrementally into a protective person, focused on the technical structures. In hindsight, he was doing a lot of research on components he could use, but it all appeared as one big chunk. And digging him out of the hole he dug was quite an effort.

At some point, the 6 reports were all intertwined and needed to be released together. Discussion by discussion, we untangled them, and delivered the 3 new ones first (months ago already) to learn what they would be in real use. We learned just a few weeks ago that they are missing an essential feature, so a second round of work on them is in order. And the 3 old ones - we found ways of delivering them one by one. After 3+1 were out, we decided on changing the layout in relevant ways, and again agreed on the steps to deliver this further.

As we're now on the final rounds, there are two reports left to test. The testing felt particularly straightforward with this pattern:

  • search for customer feedback on the old reports
  • pick any special data mentioned in the reports
  • generate a new and old report, and compare
Using this approach, I found many problems today. I learned that for a particular set of data and options, the report wouldn't generate at all. I learned that there was extra information, and that some of our conciseness goals were nicely achieved. Over the day, I felt as if I was wrapped in a warm blanket: there was the old version, the version that worked "right". It is so rare to have that that I realized I had almost forgotten how much it helps.
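As a side note, the compare step lends itself to light tooling. Here's a minimal sketch of what I mean by compare, in Python; the generate_report_* helpers are hypothetical stand-ins, since the real reporting interfaces aren't part of this story:

```python
import difflib

def compare_reports(old_report: str, new_report: str) -> list[str]:
    """Diff the old implementation (the oracle) against the new one.

    An empty result means the outputs match. Any diff lines are leads
    to investigate, not automatically bugs - some changes are intended.
    """
    return list(difflib.unified_diff(
        old_report.splitlines(),
        new_report.splitlines(),
        fromfile="old", tofile="new", lineterm=""))

# Hypothetical usage: generate both reports from the same customer data.
# old = generate_report_old(customer_data, options)
# new = generate_report_new(customer_data, options)
# for line in compare_reports(old, new):
#     print(line)
```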

Feeling too comfy, I decided to try something different, and cover combinations of options in more detail again. I made a list on paper of what I wanted to try, and to my surprise found more relevant problems in how things would misbehave.
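To give an idea of what that paper list looked like in spirit, here's a minimal sketch of enumerating option combinations, with made-up option names since the real report options aren't listed here:

```python
from itertools import product

# Made-up report options standing in for the real ones.
options = {
    "language": ["fi", "en"],
    "period": ["month", "quarter", "year"],
    "charts": [True, False],
}

# The full cartesian product: every combination of option values.
# 2 * 3 * 2 = 12 combinations here; real option sets grow fast,
# which is exactly why a deliberate list beats ad hoc sampling.
for values in product(*options.values()):
    combo = dict(zip(options.keys(), values))
    print(combo)  # ...or generate a report with these settings and compare
```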

At the end of the day, I reflected back on what I had done. How close I came to calling it done once I had covered what I had in mind, test-design-wise. And how relevant it was to find another way of approaching the work, to learn what did not work.

All it takes is finding the next stretch. But I love having the old system as an oracle. Why did I ever settle for not keeping the old installation available when I was newer to this? Can't imagine.

Saturday, February 20, 2016

Planned vs. Emergent Doing

I've had a few months of more intensive self-reflection than usual, and my usual level is pretty high too. There are goals in life I'm being coached on, and one of my major goals is to find my focus. I'm all over the place; "looking for the common thread" and "dropping stuff" were the thoughts I started from.

I tried very hard to make a plan for myself, but plans rarely work out. When I plan to organize a meetup and put it in a calendar, promising I will deliver a talk there, that works - for things that can be event-driven. But there's other stuff too.

We had a time management session for the Software Testing Finland crowd, and the speaker there suggested putting stuff in the calendar. Reserve slots for things you need to do. I tried that, and messed up my calendar completely. Because most of the stuff I need to do does not follow a plan, but inspiration.

When I test, I see which things I try to avoid (e.g. testing stuff made by a developer who fights back on everything I say) and I push myself to make space for those. I see which things I feel strongly about (e.g. fast feedback and making things available for production in the shortest possible loops) and I naturally give space to those. I see things with short-term and long-term benefits, and I balance my progress to include a mix of each. I value helping others over doing my own "responsibility", but work to balance both into a mix.

When planning failed, what I did instead was list the things in life I want to take forward. I ended up with 28 items, none of them small - more like themes: "Change the world of conferences", "Startup dreams", "Teaching programming to moms and daughters", "Writing my blog", "Writing paid articles", "Exercise", "Family and friends" and so on. And instead of planning what I would do, I just let my plan emerge and document it.

For a month now, I've been doing stuff I feel inspired about for various reasons. I've followed my fears, I've carried my responsibilities towards others, and I've been inspired by things I could not foresee but that fit my overall idea of life. My balance sheet keeps me true to my overall goal. It's my emergent plan, the anchor I can reflect against to see what themes I've driven forward. I've marked down each day which themes have had my attention, and I see the things I've worked more and less on. And I can decide what to do with that, or just let it be as it is.

I don't believe in multitasking. But I believe in a continuous flow of delivered value. This goes for both my personal and work life. I live in the moment, and that is how I get all of the things done.

Thursday, February 18, 2016

Changing the world of conferences: pay the speakers

I spent last night reading up on links about speaking as part of your professional career, and found a few of the lessons particularly insightful.

I set up a conference (European Testing Conference) to change the world of conferences. There are two changes in particular that I seek:
  1. Getting programmers and testers (and others) to work together on solving the testing challenge in a spirit of dialogue, without fear. We need to actively mix testing as programmers know it and as testers know it, to find better mixes.
  2. Getting the delivery of valuable conference sessions treated as paid work instead of a favor done for marketing exposure.
With this post, I'm focusing on the second change. The speakers should not have to pay to speak, and speakers should be paid to speak.

Reading the posts from others, I felt there's a post I need to add: one from the perspective of a conference organizer. Let's use European Testing Conference as an example. It's a bit of a different example in the sense that normally organizers also need to get paid, and with this one, the organizers are not on salary.

The group of speakers + organizers is 27 people in total. It includes 4 organizers, 4 keynote speakers, speakers with hands-on 1.5-hour workshops, and speakers with 0.5-hour talks.

For the sake of simplicity, let's assume every one of these delivers a session they've delivered before, and that preparing a shorter talk or a longer hands-on workshop takes the same time as an hour's talk. The amount to pay with Jenn's rule is 25*2000 = 50 000. If the talks were new, the sum would be 25*5000 = 125 000.

That would be the fair way of running it, but it's a goal I can't quite reach. And worse, from an organizer's view: what if I had to commit to that level of payment in advance and didn't get enough people?

The commitment we made to speakers with European Testing Conference is that they will not have to pay to speak - we cover expenses, even at our own risk. And a risk it seemed to be, up until two weeks before the conference, when we finally went from paying to organize to being able to pay something to speakers and leaving a little for next year / the other causes (*) we raise money for.

Lena's honorarium sums would come to 3750-17500 to be paid by the conference, and we seem to be landing somewhere in this range this year with what we'll split among speakers. In future years, the goal is to move from this 150-700 per-speaker range to a 2000-5000 range - sharing the financial risk and opportunity with the speakers.
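For concreteness, here is the back-of-the-envelope arithmetic as a quick sketch; the speaker count of 25 comes from the simplification above:

```python
speakers = 25

# Jenn's rule: 2000 per repeated talk, 5000 per new talk.
print("Repeat talks:", speakers * 2000)  # 50 000
print("New talks:", speakers * 5000)     # 125 000

# Lena's honorarium range, split evenly per speaker.
for total in (3750, 17500):
    print(f"{total} total -> {total // speakers} per speaker")
# 150 and 700 per speaker: the range mentioned above
```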

We're also working on openness of accounting, so when we have the numbers together, we'll share them. That should take a few more weeks. A change is starting, even if just with one conference.


(*) On the other causes: we're going to build a fund over the years to pay speakers' travel expenses for other conferences, and we're looking to partner on that with communities that support speakers. The world needs better and more diverse conference sessions. One experiment at a time.






Saturday, February 13, 2016

Two first-timer mobbing mistakes

We ran a few mobbing sessions (both programming (Selenium & unit tests) and testing (exploring & identifying security holes)) at European Testing Conference 2016 this week. As usual at conferences, there were also a lot of amazing, eye-opening discussions, and one theme in particular was people's experiences with mobbing.

With Llewellyn Falco, we summarized the two most common mistakes people seem to run into when doing their first-ever mob, ending up with a bad experience.

  1. Long rotation
    When you start, use a 4-minute timer to switch roles. The reason for this is simple: when you rotate fast, you quickly get used to changing perspectives between driver and navigator. The rotation enables you to acquire the skills of working in this format and keeps everyone engaged. It also prevents getting stuck on one person, or having one person dominate the mob. It keeps the ideas in the mob, as opposed to in a single person. Iterations matter more than time: if you are going to spend an hour at the keyboard, it's better to have 10 iterations than one.
  2. Not using strong-style driver-navigator approach
    In strong-style, the person at the keyboard does no thinking; the rest of the mob navigates her through the task. When you don't use strong-style, the navigators have to reverse-engineer what the driver is thinking. Without strong-style, you also have only one person thinking at a time, and every rotation switches which person that is. When you use strong-style in a 5-person mob, you have four people thinking, and rotation causes no disruption to taking the task forward.
There are many other aspects to having a successful mob, but these are the two big ones. If you are interested in reading more on this, check out our Mob Programming Guidebook.
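If you want to try the 4-minute rotation right away, any kitchen timer works; a minimal command-line sketch, with placeholder names, could look like this:

```python
import itertools
import time

def run_mob(members, minutes=4, turns=10):
    """Rotate the driver role on a fixed timer; everyone else navigates."""
    rotation = itertools.cycle(members)
    for turn in range(1, turns + 1):
        driver = next(rotation)
        navigators = [m for m in members if m != driver]
        print(f"Turn {turn}: driver={driver}, navigators={', '.join(navigators)}")
        time.sleep(minutes * 60)  # swap roles when the timer rings

# Placeholder mob of five; ten 4-minute turns give the
# ~10 iterations per hour suggested above.
run_mob(["Ada", "Bo", "Cai", "Dee", "Eli"])
```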

A public reply: Collaborative Exploratory and Unit Testing course in Brighton in a few weeks


In just a few weeks, I'm running a course with Llewellyn Falco in Brighton: "Collaborative Exploratory and Unit Testing". It's a wonderful course, and we've heard a lot of positive remarks from its previous run around TestBash New York, so if you are around, you can still join us.

I'm a tester (who finds coding not so interesting and exploring the best thing in the world) and Llewellyn is a developer (who finds coding the best thing in the world and tries to turn anything testing-related into code), and on the course we test an application together, in a mob, finding insights through exploration and then turning them into code.

I just received a question I felt was appropriate to answer in public:

"I'm thinking of joining the training you have on March on Collaborative Exploratory and Unit Testing. I was wondering whether I have to attend as a pair with developer or can I attend on my own?"

The course has been set up so that the group works together in a mob. One computer, a mix of people and specialty skills, including the skills that we, the two trainers, bring to the table. The one on the keyboard is not responsible for solving the problem or taking the task forward; that is a position of resting and listening, while the rest of us give the guidance on what to do.

You can join the group as an individual programmer. It would do you good and take your skills more towards what we see in hands-on software architects. Knowing testing makes you a better programmer, and the guidance we have available is very much practical.

You can join the group as an individual tester. In addition to brushing up your exploration, observation and test idea generation skills, you will get a basic feel for things you can do together with developers.

Any mix of testers and developers works for the session at hand, but more variety introduces more opportunities to collaborate with different kinds of people. 

We hope to see you in Brighton in just a few weeks. 

Friday, February 5, 2016

Join us for a training around TestBash?

TestBash 2016 is sold out (good job people, great choice!) except for people who are also joining the pre-conference training courses. Whether or not you care to join the absolutely wonderful crowd of participants and speakers for the TestBash main event, I'd like to tell you that you should join the course I'm doing with Llewellyn Falco on Wednesday, March 9th: Collaborative Exploratory and Unit Testing.

I wanted to give you a bit of background info on the course, with the hope that it would make you realize that you or a colleague of yours should be there.

I'm a really good tester, and I specialize in doing the work at my day job and distilling it into a teachable format on the side. Llewellyn is a brilliant developer, and an agile technical coach teaching developers throughout the world. And the two of us have a connection that enables us to do something that rarely happens: we bring together the best of testing as testers know it (as performance) and the best of testing as developers know it (as artifact creation).

During our 1.5 years of collaboration, we've learned to cross-pollinate ideas from the two communities and put them together in a special pairing style called strong-style pairing and a special teaching/working style called mob programming. I believe both of these styles are transformative to the ways we work at the office and to the troubles we have as testers and developers - I can surely say they have been that for me and my developers.

This course gives you an experience of the lessons we've gone through: feeling the amazement of all the things a professional tester sees when they look at an application, and seeing how the insights we create, as we work closely with professional programmers, can transform into working, maintainable test automation that is worth having.

Shall we meet around TestBash to work on this together? You can sign up here.

Wednesday, February 3, 2016

Best ideas win when you care about the work over the credit

With all the Twitter buzz on debate, there was a gem among the noise - one of those tweets that make me think.
My experience, as a vocal person myself, is that often the best ideas don't win; they get hidden under the need to win a debate. This has become more and more clear to me through being exposed to Mob Programming.

Meeting dynamics

Think of a typical meeting. You have a bunch of people discussing a new feature: its scope and implementation, things to consider, and risks. It's probably a cross-functional team where people take somewhat different viewpoints.

Let's look at the meeting from the point of view of game theory: how do you win in a meeting? If you go to a meeting and come away successful, you must have said something. For us testers, there might be just the right question to ask about risks, one that changes the whole design and makes the whole painful 2 hours worthwhile and exhilarating. Especially since many of us remember a time when we weren't invited, and the lack of our contribution made us fail big time. In meetings, it's the ideas we can think of that argue. We often have a debate of some sort, with the idea that the best idea should win. In a meeting, there's an incentive to have your ideas be adopted.

Even better: if your idea gets adopted, for most people in the meeting this means you get to step away from the responsibility of doing the work. If it takes you an hour of a meeting to get your idea across, it can easily take a day or a week for the idea to be turned into an implementation. So there's even more incentive to get your idea across. The idea gets the credit: it was my idea, even if it was your work.



Mobbing dynamics

In a mob, we work on one thing, on one computer all together. It's like a meeting, but the dynamics are nothing like a meeting.

In a mob, the incentive moves from coming up with the idea to getting the work done. A typical dynamic for achieving this is that we don't debate ideas, we adopt them. If there are two ways of doing the same thing, let's do them both. And perhaps start from the underdog, the quirkier idea. And you start to trust. Trust enables you to unlearn the notion that the idea matters more than the implementation. Getting credit becomes a team thing. You will also see combinations of two ideas from two people. And the emergence of a third, when neither of the ideas on offer quite cuts it.

In these cases, the incentive is very different: get the work done. So people are more willing to work together, more willing to try things out, and when it is done, they are more willing to let it go rather than continue fighting. It's important to care about the work and the result over caring about the credit.

Best ideas don't come out in debates. Best ideas come out in collaboration, when people feel safe. Loud and quiet alike.

On credit: while this stuff is now something I can easily say, I know where I learned it. Thank you, Llewellyn Falco, for opening my eyes to better ways of contributing.

Stepping in on bad behaviors

It was one of these Tuesday afternoons, and nothing said it was supposed to be any different. There was the scheduled regular weekly meeting. And there was the work around the meeting, probably on yet another release and feature. Business as usual.

I was having a great time, socializing with my team on the work we needed to do. Or most of my team at least. But then the meeting starts.

At first it was all regular, checking in on what we're working on, just like we always do. And then we start talking about a specific area.

That specific area has for years been something we'd call a silo. I have recently been part of breaking that silo, and the code quality revealed makes me think of Matt Wynne's funny, sarcastic talk on Mortgage-driven development. But as always when you do something like this, you bring out discomfort and pain.

So when I asked about progress and things I could help with in the area, I could anticipate all sorts of responses from the developer. And there it was again: the personal insults, the insistence that I'm unreasonable, and the wishes that I would just walk away and leave. Something has been learned, though: the sexist remarks that completely throw me off balance are gone, just a memory that keeps me on my toes.

All I did, from my perspective, was try to negotiate the next testable increment, and the pushback was harsh and personal. I can deal with it. But what surprised me was how my team of 8 others, including the project manager, reacted. Most of them were looking at their toes, hoping they were invisible and pretending they did not hear anything. No one would join in, correct me if I was wrong, or help us resolve the issue. I often feel alone at work, and I felt even more alone realizing I'm the only one who goes face first into resolving a conflict, trying to talk it through and understand.

But this left me thinking about dynamics at work. Is it really normal that if you see something bad happening to someone you know, you'd rather stay away? I hear it's a common phenomenon: everyone waits for someone else to step in.

I asked my team why no one spoke up, only to learn they felt I was doing fine by myself. That I did not need help. But a little support would surely have been nice.

I get similar feelings from some online arguments. So if you feel like an outsider wondering whether to join in, please do. It makes things feel so much better in case there's really something threatening going on.


Tuesday, February 2, 2016

Consent-first debate

Recently, I've been putting a lot of thought into how I want to handle myself in professional circles. It started with a friend from Agile Finland mentioning that he sees me getting into these arguments on Twitter where there just isn't a winner. Everyone loses. Time. Peace of mind.

From that comment, I signed him up as my personal coach. I wanted to work on a "mindful online presence", and so far the only thing I've learned is that I feel much better when I manage to step away from the arguments. To remember that I don't have to respond, even if I sort of initiated a discussion by venting about people saying things like "detrimental to our craft" about something I believe might just as well be taking things forward.

Stepping back from the discussions doesn't usually please people; they tend to seek answers on why I would do that. The way I think about it right now found words in a blog post by Marlena Compton, titled "Feminism in the Testing Bubble". My takeaway from that article isn't the feminism, but the idea of a tax:
"There is a tax for people who are part of any marginalized group.  The tax requires that you will spend your time and energy not on the actual topics you care about and want to write about such as software, but that you will spend time and energy defending your participation in the space and your right to be there.  The tax is so far-reaching and insidious that you will end up paying before you even realize what’s happening."
"Payment comes in many forms:  your influence, showing actual emotions on twitter, a boss’s anger, exhaustion from explaining yourself (again) and then there are all of the requests people make of you to teach them because they don’t feel like finding answers for themselves."
I feel part of a marginalized group in context-driven testing. I don't want to stop saying "exploratory testing" or "test automation". I don't want to discuss the one true way. I don't want to build walls where you only hang out with people in one camp. And I don't want to spend my precious little time trying to convince those who want things I don't want that my way is the true way.

I blog to share what I think. I write more for myself than for an audience. If any of it is useful, great. If it starts discussions that check first on consent, great. When I refuse to invest my time, I would rather have people accept my choice than tell me that I must discuss all things testing.

I'm not paying the tax anymore, if I can learn to avoid it. I want to talk with my peers about how we teach testing skills without RST models, in a very particular style: pairing/mobbing and slow change, an idea at a time, intertwined with reflection. I want to find time for that, and I prioritize other things out so that I have the capacity.

Marlena sums it well:
" I don’t mind if people communicate with me to tell me how wrong I am about that, just don’t expect me to give you a cookie."
 "...it is ok for me to push back on taking responsibility for fixing things.  It is ok for me to voice a frustration or call someone out and leave it at that."
Could there be a lesson here on the importance of consent in debates? At least teaching/coaching without permission is considered more of a bad thing than a good one.


Thanks, but I still have test automation

There's another big buzz in the world of testing, as the allowed and acceptable words are being refined. This time, the word is automation.

I know testing cannot be automated, for now. In its full features, it's a process of collecting empirical feedback on everything from things we know to expect to things we had no idea could exist. It's full of all kinds of wonderful activities where a thinking human is in a critical position. It's hard to talk about testing because your testing and my testing won't be exactly the same anyway. So why would it be such a bad thing when people talk about test automation, meaning the parts they believe they can automate?

I also like to think of it this way. If programmers' job in general is to automate things, getting started on that does not mean everything has to be automated. A robot that does heavy lifting for people is valuable, even if it only operates in the warehouse and does not deliver things to your front door.

Merriam-Webster defines automation as "the process of putting an apparatus, operation or system under the control of a mechanical or electronic device". When we put parts of testing under the control of an electronic device (a tool we need to attend to differently than if we did not have it), we're doing test automation.

Having more words to explain the details would be helpful. What type of test automation are we talking about? Is it execution? Monitoring and reporting? Setting up data and environments? Which specific problem in the domain of testing are we trying to control by removing humans, at least partially?

I've seen my share of wasteful test automation. I've seen it replace thinking where thinking is due. But I've also seen that fixed by introducing an experimentation culture and empirically assessing what worked for us. That can include automation failing on the value it provides. We already have words to discuss that problem: value, opportunity cost, short-term and long-term. Many of us are having that conversation continuously.

A main principle I always remember from context-driven testing is that people are the most important aspect of any context. How about believing that people will look for solutions and figure out what does a good enough job for them, without redefining anyone else's namespace? If using the words "test automation" makes me less of a context-driven tester, so be it. I just want to ensure the organizations I work with have the best chances of doing a good job. And a good job looks different depending on the organization and the day we're looking at it. Everyone is learning every day, including those of us who think they've already learned a thing or two.

** As for calling automators technical testers: no. There are technical testers who don't program but are intertwined with the technical aspects of the system and environment. Saying otherwise is like saying people in IT departments aren't technical.