Tuesday, July 31, 2018

Folks tell me testing happens after implementation

As I slowly orient myself to the shared battles across more and more roles throughout the organization, returning to the office from a long and relaxing vacation, I'm thinking about something I was heavily drawing on whiteboards all around the building before I left.

The image is an adaptation, based on my memory, of something I know I learned with Ari Tanninen, and I probably don't do justice to the original illustration. But the version I have drawn multiple times helps discuss an idea that is very clear to me yet seems hard for others: at the beginning of the cycle, testing feeds development, and at the end of the cycle, development feeds testing.


There are basically two very different spaces on the way from idea to delivery. There's the space that folks like myself, testers, occupy together with business people - the opportunity space. There are numerous ideas I've participated in saying goodbye to by testing them. And it's awesome, because in the opportunity space ideas are cheap. You can play with many. It's a funnel for choosing the ones to invest in, and not all ideas get through.

When we've chosen an idea, that's when the world most development teams look at begins - refining the idea, collecting requirements, minimizing it to something we can deliver a good first version of. Requirements are not the start of things, but more like a handoff between the opportunity space and the implementation space.

Implementation space is where we turn that idea into an application, a feature, a product. Ways of dealing with things there are more like a pipeline - while something is in it, nothing else gets through. We need to focus, collaborate, pay attention. And we don't want to block the pipeline for long, because while it is focused on delivering something, the other great ideas we might be coming up with won't fit in.

A lot of the time we find seeds of conflict in not understanding the difference between cheap ideas we can toy with in the opportunity space and selected ideas turning expensive as they enter the implementation space. Understanding that both exist, and play by very different rules, seems to mediate some of that conflict.

As the lead tester, with a lead developer by my side, we are invited to spend as much of our effort in the opportunity space as we deem useful. It's all one big collaboration.

Looking forward to the agreed "Let's start our next feature by writing the marketing text, together". Dynamics and orders of things are meant to be played with, for fun and profit.

Stop thinking like a tester

I'm very much an advocate for exploratory testing, and yet I find myself seeking something akin to what Marlena Compton seems to be doing in the space of Extreme Programming and pairing - seeking the practicality, the inclusion, and the voices that keep getting shouted down by the One Truth.

Whenever I find people doing good testing (including automation), I find exploratory testing plays a part. The projects lacking exploratory testing are ones that I can break in two hours.


So clearly the focus and techniques I bring into a project, as I apply them, are something special.

In this particular project, some of the observations I shared led to immediate fixes, easing things for whoever came after me. Some of the fixes (documentation) were done in a mid-term timeframe, and looking at the documentation now, I don't want to test it, I want to write it better. And some of the fixes remained promiseware (making the API discoverable, which it isn't - and the message was well delivered by having a group of people with relevant skills fail miserably with its use).

So sometimes I've found myself saying that I think like a tester. I do this stuff that testers do. It's not manual, so it must be the way I think, as a tester.

I've seen the same or similar curiosity and relentless will to believe that things can be different in other roles too. My favorite group of like-minded peers is programming architects, and I get endless joy from those conversations where I feel like I'm with my people.

So I came to a conclusion. Saying that we teach how to think like a tester is like brute-forcing your thinking patterns onto others. Are you sure the way other people think wouldn't actually improve the way you're building things, if you carefully made sure everyone in the team is celebrated for their way of thinking?

I sum this up like this:
Be your own, true, unique self and help others do that too. Growing is a thing, but while growing, be careful not to force the good those people already have into hiding.

It took me so much time to realize which things I do because they are expected of me and my kind, and which I do because I believe it is the right thing for me to do. Appreciating differences should be a thing. Think your way.

Monday, July 30, 2018

The line between Exploratory Testing and Managing It

There's no better way of clarifying one's own thoughts than writing in a blog where one has given themselves permission to learn to be wrong. This is one of those posts that I probably would not write yet if this blog wasn't an ongoing investigation into the way I think around various topics.

A friend shared a piece of feedback on what I might be missing from my "What is Exploratory Testing" article, and I cannot decide if I feel it is an omission or just how I structure what is what. What they shared is:
I believe that exploratory testing is a separate concept from managing exploratory testing.

Exploratory testing is the idea of skilled testing where learning continuously, and letting the learning change your next steps, is the core. To manage something like that, you end up with considerations like: what if you need to convince others that what you are doing is worthwhile, beyond reporting discussion starters like bugs or questions? What if you're not given an area to work on by yourself, but need to figure out how to share that area with others?

When I've been trying to understand the line between doing it and managing it, I've identified quite many things some people find absolutely necessary for managing it, to the degree that they would not be comfortable calling it exploratory testing without them. I've come to the idea that as long as we're not the testers who are like fridge lights - only on when the door is closed - with bug reporting, any structures around the days of work of a tester are optional. They become necessary when there is a group rather than an individual.


For visibility and learnings from the testing I do: it's been years of doing exploratory testing where no one cares at the level of detail I care about. I find myself introspecting, looking at a wall, or writing one of these blog posts at times when others did not notice anything was different between this time and another. Learning to learn, learning to critique your own way of doing things, identifying things you can do differently and diligently doing them differently are all parts of self-management within the "days of work" of doing exploratory testing.

Exploring in a group

There's such a thing as low-quality exploratory testing

I'm picking up weak signals, and one of those signals recently has been suggesting that exploratory testing isn't every tester's bread and butter.

First it was an organization that introduced the agile testing idea that developers test. This left testers of a traditional background wondering what they would now contribute, finding themselves unable to figure out what depth in testing would look like. There were cries of dislike for exploratory testing, not knowing what they should do to not repeat the tests developers were already doing, and realizing they were no longer able to find problems.

Then it was an organization that tried out exploratory testing for a limited timeframe before it was time for the traditional test-case-led manual testing. The testers were again filled with despair at needing more structure, and expecting the structure to emerge from somewhere outside themselves.

If and when there is something core to exploratory testing in addition to learning, self-management is it. You need to be able to make your own plans, create your own structures of support by selecting from multitudes of examples, and reflect on your own results. Here's one example of what exploring with intent could look like.

The other side of the coin is that people who are not working in an exploratory way find themselves frustrated with the lack of challenge, having to manage and maintain tons of test cases and still continuously getting feedback about missing relevant bugs. Yet when they get out of it, they struggle if they cannot find the depth: the multidisciplinary nature of testing and the vast range of perspectives that the others' testing could still be missing.

Exploratory testing is skilled work. That means the most common way of it showing up in projects is as low-quality exploratory testing, which has very little to add on top of what developers are already capable of doing.




Going Meta: Writing an Article about What Is Exploratory Testing

I'm working on my book, Exploratory Testing, published on LeanPub. LeanPub is a lovely platform, because I can publish versions of my book as I go, and the magic of people paying me for work I'm still in the process of contributing is the best cure for author's procrastination I know of. Also, paying for this book is a way of financially supporting all the work I put into defining testing and helping people learn it. Obviously, you can also get it for free.

There was a chapter I wanted to write that was particularly difficult for me: What is Exploratory Testing?

My first version was a list of bullet points:

WHAT IS EXPLORATORY TESTING
  • more productive
  • better testing
  • multidisciplinary
  • intertwined test design and execution
  • difference to *manual* and *scripted* and *automated* 
  • starting from scratch vs. continuing on a product
  • product as external imagination
  • empirical, evaluation as intent
  • allows moving into scripting and out of it based on feeling - discretion of the tester centered
  • recognizing exploring based on what it gives you
  • it’s about HOW we do testing in a skilled way. Not when, or in what kind of process, or by whom.
  • performance, improvisation, intentional
  • learning and modeling for multidimensional coverage
  • next test influenced by lessons learned on previous tests
  • can’t tell in advance which tests should be run or in particular how they should be run in detail
  • test cases documented as an output of testing
  • scripted approach takes ideas out of the designer’s head and puts them on paper, and assumes people follow that as instructions
  • three scopes, many ways to manage
  • premature writing of instructions hinders intellectual processes
  • limited only by breadth and depth of our imagination and willingness to go investigate
  • enable intake of new ideas into the work immediately
  • automation is a modern form of documentation
  • focused intent on what to evaluate, appropriate documentation
  • discover patterns, opportunities and risks
  • instead of pass/fail, “is there a problem here?”
I wanted to write something that Medium says you can read in five minutes. This is what I ended up with: https://medium.com/@maaret.pyhajarvi/what-is-exploratory-testing-88d967060145

So now you see the short and the long version. What should I leave out of what I wrote to include more of what I left out? 

Saturday, July 28, 2018

Code of Conducts and Stopping Bad Behavior

This post is inspired by two tweets that just ran past me on my timeline.

First one was a suggestion that a Code of Conduct exists to make underrepresented minorities more comfortable, sharing an example of rude mansplaining and dismissal of technical abilities.

Second one was an experience of unwanted male attention at a meetup making a woman not return, pointing out there was no code of conduct, as well as sharing another meetup's experience of introducing contracts against using mentorship relations as a means of meeting people in a romantic sense.

This stuff is all around. It is structural and perpetuated by all genders to an extent. Women are almost as bad at assuming that your programming skills vanish into thin air now that you've become a manager, when the opposite could just as well be true. We hold a belief, strongly, that for our career's sake we should not be actively sharing stuff around minority discrimination. And we really don't understand that because all of this is structural, it is not intentional - I need to really pay attention to my belief systems, and it sometimes takes people who feel really safe with me to point out my misbehaviors.

The tech world tries to solve some of these issues with codes of conduct, which can be a mechanism for informing and creating agreements on what is appropriate and what is not. Protecting the underrepresented sometimes feels like oppression to the majority view. But hateful, mean comments close folks out. Keeping that shit to yourself may close you out, but that is then your choice.

What I wanted to write about, though, was a conference I was involved in, one that tries hard to make itself a safe place. They've been very successful in step 1 of making it safe: equal representation of interpreted binary gender. Their speaker roster models what gender, and often also race, in this industry should look like. They are a tech conference where the feel of it is that binary genders are equally represented. And that comes with hard work they should feel proud of.

Equal representation as step 1 is important in the sense that the majority of people play nice together. The parties are fun, and I don't sense a need to be overly careful. The atmosphere is normal, and everyone seems to be on generally good behavior. People mix, people talk. And sometimes people consensually hook up.

The conference has had a code of conduct for a while, and last year they also introduced extra mechanisms of enforcing it. That is where I got involved. And it turned out to be an awful experience for me.

It started off with someone pointing out that a talk title made them uncomfortable. I had volunteered to represent, so I looked into it. I talked to the speaker. I knew the contents were ok. But my judgement was that the title should not have been accepted to protect minorities I did not identify myself with. Nothing changed, except that I gave up on my extra responsibilities because they violated my sense of justice with "can't fix a mistake at this point".

Even without the extra duties, people would now consider me someone to escalate issues to. In the middle of the night, I find myself pinging the conference organizers to sort out their own stuff, with little success. The incident this time is some ass grabbing triggering bad old stuff, and having to deliver the message to the person that while they might think they did not do anything, I'm saying that none of this stuff can happen any more and that I will not be telling them who reported them. Not the most joyous of my nights.

The reason I write about this is to say that even in the best of conferences, stuff happens. Codes of conduct are only as good as the people enforcing them. People enforcing them need to take deep looks inside their value systems, learn to look empathetically at underrepresented groups they are not part of, and bravely address issues as they come. Questioning people's negative experiences has a term: victim blaming.

So for my own conference, I ask people to be kind and considerate. And while I do have a code of conduct, I know that my enforcement of it matters. I've needed to tell a dear friend that their jokes are inappropriate when they perpetuate the programmer - tester divide. I've needed to tell a speaker that many of their references are known assaulters, that they're all men, that the only woman in the whole slide deck was presented as a laughing stock, and that they used ableist language discussing autism. The stuff I do is educational, but these are also unintentional violations of the spirit. I remember the moment of having to tell this to a speaker. I remember their surprise. I remember them exclaiming how we were the first to ever tell them this, even though thousands of people had seen the talk. Even I had seen the talk before. And I remember the thank you.

We all need to learn more about this stuff. And when we see things happening, when we are in position of privilege, we can step in and help.



Friday, July 27, 2018

Three cool recipes to bring exploratory testing to stage

Going into the fourth year of European Testing Conference, where one type of testing I've wanted people to teach practical lessons on is exploratory testing, I find myself still needing to actively convince skilled and awesome testers to teach testing instead of the things around it.

I'm all for agile testing, yet I see it as mostly discussions around testing (who can/should do it, what size of chunks we do it in, how we can do more of it earlier and continuously) - the person doing the testing and looking at the application is largely left to their own devices.

Exploratory testing is: when you pop into a design meeting, what are the questions you choose to ask, the problems you choose to pinpoint? It is: when you sit next to a developer to pair on programming, what are the problems you pinpoint immediately, what do you keep track of for after the pairing, and what do you know you will need to spend private time on, because initiating the move while pairing would feel like it slows you down? It is: when you listen to people, read documentation and plan for the hands-on time learning with the application, to empirically figure out what you really know and don't know.

I want to see more of stuff on how to really do that. And I know it isn't easy.

In year one of European Testing Conference, I got on the stage myself to deliver a demo talk on exploratory testing. I called it "Learning in Layers - A Demo of Exploratory Testing", and started my session off by removing my personal access to the keyboard, inviting someone I had never met from the crowd to be my hands. For an idea to get from my head to the keyboard, it needed to be spoken out as intent, followed through with location and details on where on the screen I wanted things to happen. This style of pairing is called Strong-Style Pairing, and my demo pair was awesome at speaking back in questions, pointing out things I wasn't seeing.

In year two of European Testing Conference, I convinced Huib Schoots to do a practical exploratory testing session. His version was called "Testopsy", and it was a fun session where the audience first listed what activities we expect to see while someone is testing, and then mapped out which of those activities we were actually seeing. If you need a vocabulary for the activities, the Exploratory Testing Dynamics list gives a nice basis for that.

In year three of European Testing Conference, I had Alex Schladebeck step up to the challenge. Her version was a show and tell, with deep insights into what the audience can take away from watching others.

So, here's three recipes:

  • Do it strong style paired so that everything must be spoken out loud
  • Focus on the activities that get intertwined so that you can develop skills in each activity not just the umbrella term
  • Show what you do and what it finds with a real live in production application
I've since added a fourth recipe I like showing: show how you explore through creating automation. I took that version on stage at Agile Testing Days USA and Selenium Conference India. 

What is your recipe? Come tell us in the European Testing Conference Call for Collaboration.


Refining a 34-year-old practice

Exploratory testing is a term that Cem Kaner coined 34 years ago to describe a style of skilled testing work that was common in Silicon Valley, uncommon elsewhere. When the rest of the world was focusing on plans and test cases and the separation of test design and execution, exploratory testing was the word to emphasize how combining activities (time with the application) and learning continuously about the application and its risks created smarter testing. The risks exploratory testing is concerned with are not limited to just the application right now, but everything the application goes through in its lifecycle. Automation of relevant parts of tests was always a part of exploratory testing, as the tangible ideas of what to automate next are a result of exploring the application and its risks.

There are a few things in particular that refine what exploratory testing ends up looking like in different places:

  • Testing skill
  • Programming skill 
  • Opportunity cost
  • Outputs required by the domain
Testing skill

Testing skill is about actively looking at an application in a deliberate way of identifying things worth noting in multiple dimensions. It's about knowing what might go wrong, and actively making space for symptoms to show up and building a coherent story of what the symptoms indicate and why that would be relevant.

The fewer ideas people have about how we could approach an application for testing, the easier the job they feel they have at their hands. Shallow testing is still testing.

Programming skill

Programming skill is about identifying, designing and creating instructions the computer can execute. It's about making a recipe out of a thing, and using the computer to do varying degrees of the overall activity. When applied to tests, it leaves behind executable documentation of your expectations, or enables you to do things that would be hard (or impossible) to do without it.

Computers only look at what they're programmed to look at, so the testing skill is essential for test automation.

Opportunity cost

When testing (or building software for that matter), we have a limited amount of effort available at any given time. We need to make choices of what we use the effort on, and one of those choices is to strike a personal and team level balance of how we split the effort between tests worth trying once and tests that turn out to be worth keeping, documenting and/or automating.

We strike a balance of investing in information today and information in the future. We find it hard, if not impossible, to do both deep investigative thinking with the real application and maintainable test automation at the same time. But we can learn to create a balance by time-boxing some of each, intertwined in a way that appears as if there was no split.

Outputs required by the domain

Sometimes exploratory testing produces discussions initiated around potential issues. Other times those discussions are tracked in a bug tracking tool and bug reports are the minimum visible output you'd expect to see. And sometimes, in domains where documentation as proof of testing is a core deliverable, test cases are an output of the exploratory testing done.

Some folks are keen on managing exploratory testing with sessions, splitting the effort used into time boxes with reporting rules. Others are keen to create charters, making visible what the time is used on in agile teams, as a means of talking about and sharing the box of exploration.
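
For readers who haven't seen session-based management, here is a minimal sketch of what a session record could contain. The field names follow the spirit of session-based test management (a charter, a time box, a debrief); the exact `Session` shape below is made up for illustration, not a tool you'd find anywhere.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """One time-boxed chunk of exploratory testing, reported afterwards."""
    charter: str            # the mission: explore what, with what, to discover what
    duration_minutes: int   # the time box, often 60-120 minutes
    notes: list = field(default_factory=list)
    bugs: list = field(default_factory=list)
    questions: list = field(default_factory=list)

    def debrief(self) -> str:
        # The debrief is what turns private exploration into shared visibility.
        return (f"Charter: {self.charter}\n"
                f"Time box: {self.duration_minutes} min, "
                f"{len(self.bugs)} bugs, {len(self.questions)} open questions")

session = Session(
    charter="Explore invoice export with malformed dates to discover parsing risks",
    duration_minutes=90,
)
session.bugs.append("Export silently drops rows with two-digit years")
session.questions.append("Is the date format locale-dependent on purpose?")
print(session.debrief())
```

The point is not the data structure itself but that the structure is optional scaffolding: an individual may keep all of this in their head, while a group needs something written down to share.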

Your domain defines what outputs look like in scale from informal to formal.


All skilled work relies on availability of that skill. Exploratory testing is an approach, not a technique.


Monday, July 23, 2018

Life after GDPR

In the last year, offices around the world have been like mine, buzzing with the word GDPR. The European General Data Protection Regulation became active and enforceable in May.

It's not like privacy of our users did not matter before. Of course it did. But GDPR introduced concepts to talk around this in more detail.

It assigned a monetary value to not caring that should scare all of us. An organization can be penalized with a fine of 4% of the company's global annual turnover or 20 million euros, whichever is greater.

It introduced six requirements:

  • Breach Notification - if sensitive data gets leaked, companies can't keep this a secret. And to know that there was a breach, you need to know who has been accessing the personal data. 
  • Access to Your Data - if anyone holds data on you, asking for it should get it to you, without a cost.
  • Getting Forgotten - if the original purpose you consented to changes, or you withdraw your consent, your data needs to be removed.
  • Moving Your Data - if you want to take your data elsewhere, they should provide it in a machine-readable format.
  • Privacy by Design - if there's personal data involved, it needs to be carefully considered, and collecting private data just in case isn't a thing you can do.
  • Name Someone Responsible - and make sure they know what they're doing.
Getting ready for all of this has required significant changes around organizations. There have been needs to revise architectural decisions like "no data should ever be really deleted". There's been refining of what personal really means, and adding new considerations on the real need for any data belonging to that category.

In a world where we build services with better, integrated user experience, knowing our users perhaps with decades of knowing their personal patterns and attributes, we are now explicitly told we need to care. 

So as a tester, looking at a new feature coming in for implementation, this should be one of your considerations. What data is the feature collecting or combining with, and what is the nature of that data? Do you really need it, and have you asked for consent for this use? Did you cover the scenarios of asking for the data, moving the data, or actually getting the data deleted on request?
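
To make those scenarios concrete, here is a minimal sketch of what checks for the access and erasure rights could look like. The `UserDataStore` class and its method names are hypothetical stand-ins for whatever your real service offers; the point is the behaviors, not the API.

```python
import json

class UserDataStore:
    """Toy in-memory stand-in for a real service holding personal data."""

    def __init__(self):
        self._records = {}

    def collect(self, user_id, data, consent_purpose):
        # Privacy by Design: record why we hold the data at all
        self._records[user_id] = {"data": data, "purpose": consent_purpose}

    def export(self, user_id):
        # Access to Your Data / Moving Your Data: machine-readable, free of charge
        return json.dumps(self._records[user_id]["data"])

    def forget(self, user_id):
        # Getting Forgotten: remove on request or withdrawn consent
        self._records.pop(user_id, None)

    def holds_data_on(self, user_id):
        return user_id in self._records

store = UserDataStore()
store.collect("u1", {"name": "Alice"}, consent_purpose="newsletter")
assert json.loads(store.export("u1")) == {"name": "Alice"}  # right of access
store.forget("u1")                                          # right to erasure
assert not store.holds_data_on("u1")
```

In a real system the interesting testing is in the corners this sketch hides: data duplicated into caches, logs and backups, and whether "forget" really reaches all of them.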

For us testers, the same considerations apply when we copy production data. The practices that were commonplace in insurance companies, like "protected data", are now not just for colleagues' data; we need to limit access and scramble more. I suspect test environments have been one of the last considerations addressed in GDPR projects in general, already schedule-challenged just to get minimally ready.
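
One common way to scramble is to pseudonymize personal fields before the data ever lands in a test environment. Below is a stdlib-only sketch; the field names are made up, and note that salted hashing like this is pseudonymization rather than true anonymization, so access to the test data still needs limiting.

```python
import hashlib

def pseudonymize(value: str, salt: str = "test-env-salt") -> str:
    """Deterministic, one-way replacement: the same input always maps to
    the same token, so joins across copied tables keep working."""
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return "anon-" + digest[:12]

def scramble_record(record: dict, personal_fields: set) -> dict:
    """Replace personal fields, leave the rest of the record intact."""
    return {key: pseudonymize(val) if key in personal_fields else val
            for key, val in record.items()}

production_row = {
    "customer_name": "Maija Meikalainen",
    "email": "maija@example.com",
    "policy_type": "home insurance",   # non-personal, kept as-is
}
test_row = scramble_record(production_row, {"customer_name", "email"})
assert test_row["policy_type"] == "home insurance"
assert test_row["email"] != production_row["email"]
```

Determinism is the design choice worth testing here: scrambling the same production snapshot twice should produce identical test data, or referential integrity between tables quietly breaks.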

We should have cared before, but we should in particular care now. It's just life after GDPR came into force. And GDPR is a way of codifying some rules around the agency of individual people in the software-connected world.




Sizing Groups Based on Volume

I've been reluctant to read my twitter timeline for the last few days. The reason is simple. There is an intensive discussion going on between two people I follow and respect, and the way they discuss things looks to me like they are talking past one another and really not listening. There was a part of the discussion I could not avoid though and it was around a claim that:
NoEstimates crowd is a small loud minority.
I don't really have measures, and I definitely don't have enough interest to invest my time into measuring this. But I'm part of that claimed minority; I just generally don't feel like being loud about it. And I most definitely don't want anything to do with the related discussion culture of meanness, shouting, and insults that seems to be associated with that hashtag. The people against NoEstimates come off as outright mean and abusive, and I've taken some hits just by mentioning it - I learned my lesson really soon.

You can be loud on a conference stage, with your voice amplified as you're given the big stage. I listened to one popular speaker this spring who used a significant portion of their talk on ridiculing people in the NoEstimates space, mentioning them by name, and I felt very uncomfortable. It could be me they are ridiculing, and I don't think ridiculing - even when it makes people laugh hard in the talk - is the way we should be delivering our messages, no matter how much we disagree.

Y'all should know that volume is not how you size up a group. And the size of the group shouldn't matter, because there should be places where it is ok to do things in a different way without feeling attacked. It's my right to be stupid in ways I choose.

I've been seeking alternatives to wasting my time on estimating for more than 10 years. You could claim it is about me being bad at it, or not trying all the awesome things. I've been part of projects that did Function Point Analysis, and I still think that was awful. I've been part of projects that did work breakdown structures, estimating each item we broke down, but the problem with that is that the items remain vague, and instead of supporting continuous delivery of the most important parts of value, they have left me with big chunks of work that are hard to split value-wise. We've used past data of all sorts. And I've wasted a big chunk of my life creating stuff that I don't believe to be of any value, just because someone else thinks it will be good and I'm generally open to experiencing everything once.

The question about NoEstimates for me boils down to opportunity cost. What else, again, could I get with the time used on estimating? Are there options that would be better?

15 years ago one Friday afternoon, I sat down with my bottle of diet coke at the office coffee table. I had two colleagues join me with their coffee cups. One of them started the discussion.

"I'm going to be doing some overtime during this weekend", they said. "The product owner needs estimates on this Chingimagica Feature, and they need it for a decision on Monday", they continued.

We looked at the fellow, excited about the Chingimagica Feature, willing to sacrifice a major chunk of their weekend, and almost unanimously quoted "sustainable pace" and the general idea that giving up your weekends was almost always a bad idea. But they didn't mind; this was interesting, important, and they had all the info the product owner would need.

So we made a joke out of it. We took out post-it notes, and the other two of us wrote down our estimates for Chingimagica, each on a post-it note. We did not show our notes to each other, but just said we'd hand them out on Monday when the weekend of work was done.

They used 10 hours to create a detailed work breakdown structure and analyze the cost.

Monday came, and we compared. We all had the same estimate. They had more detail on why it came to what it did, but it was still the same.

That was when the need to find better ways of working became evident.

It is ok that some people feel strongly about estimates, and some of them may be very successful with them. I see morale decline in projects close to me that focus on estimates over continuous delivery, and feel I need to help us stop paying real money for something that hurts people.


Saturday, July 21, 2018

Mob Testing is Different to a Bug Bash

When I introduce Mob Testing (all of us testing together on one computer), I find myself in a place where people who are not doing mob testing say they are. Often, in asking questions, it turns out that what they are doing is some form of working together in a crowd.

A group testing together on a number of computers is usually called a bug bash. What characterizes a bug bash, though, is that not everyone in the group is working on the same thing, except at the very high level of "we are all testing a feature". The actual intent is not shared; the scenario is not shared.

Another form of crowd that people tend to identify as the same as mobbing is anything with a group and a single computer.

There's the version where one works and the others watch without really contributing anything other than pressure.


There's the version where one works and the others more or less make a point of not being in any way necessary - the "great, I get paid to not do anything" version of it.


And there's the version where 200 people drive and one navigates, which out of these cases is closest to a mob, but is not one. This picture is an actual image of the final testing of the Finnish Parliament in-session system. The testing took place so that the leader at the lectern, outside the picture, would call out a step of what everyone was doing, and each of the places had someone to do that on command. For this to be mobbing, there would need to be more minds on the hardest problem of what to test, beyond being an extension of the keyboard.



So when you think about whether your group activity is mob testing / programming or not, here are my rules of thumb:

  • Are you all working on a shared intent, so that when you rotate and switch roles, anyone in the group is in the same context and the same work can continue? (the "yes, and" rule)
  • If you have multiple devices you need multiple hands on (multi-driving), is the overall group, hands off the devices, still in control of the work that happens on those devices? For an idea to reach the computer, it must go through someone else's hands. 
  • Is everyone in the group either learning or contributing through active participation? 
The connection mobbing creates tends to be tighter than with other group activities. 



Thursday, July 19, 2018

Skipping Ahead a Few Steps

I work with an agile team. I should probably say post-agile, because we've been agile long enough to have gone through the phases from hyper-excited to the-forbidden-word to real-continuous-improvement-where-you-are-never-done. But I like to think of agile as a journey, not a destination. And it is a journey we're on.

The place we're at on that journey includes no product owner and a new way of delivering with variable length time boxes, microreleases and no estimates. It includes doing things that are "impossible" fairly regularly, but also working really really hard to work smart.

Like so many people, I've come to live the agile life through Scrum. I remember well how it was described as the training wheels, and we have definitely graduated from using any of that to a much stronger focus on the engineering practices, within a very simple idea of delivering a flow of value. I know the whole-team side of how we work and how the organization around us is trying to support us, but in particular I know how we test.

This is an article I started writing in my mind a year ago, when I had several people submit to European Testing Conference experience reports on how to move into Agile Testing. I was inspired by the stories of surviving in Scrum, learning to work in same teams with programmers, but always with the feel of taking a step forward without understanding that there were so many more steps - and that there could be alternative steps that would take you further.

The Pushbike Metaphor

For any of you with a long memory or kids that jog your memory, you might be able to go back and retrieve one about the joy of learning to ride a bike. Back in the days when I was little, we used training wheels on kids learning to bike, just to make sure they didn't crash and fall as they were learning. Looking around the streets in the summer, I see kids with wobbly training wheels they no longer really need, but they are around just in case, soon to be removed as the kids are riding without them.

Not so many years back, my own kids were at an age where learning to bike was a thing. But instead of doing it like we always used to, with training wheels, they started off with a fun thing called a pushbike. It's basically a small-sized bicycle without pedals. With a pushbike, you learn to balance. And it turns out balancing is kind of the core of learning to bike.

My kids never went through the training wheels. They were scaring the hell out of me going faster than I would dare on their pushbikes, and naturally graduated to ones with pedals added.

This is a metaphor Joshua Kerievsky uses to describe how in Agile we should no longer be going through Scrum (the training wheels) but figure out what the pushbike of agile is, taking us to continuous delivery sooner. Joshua's stuff is packaged in a really nice format with Modern Agile. It just rarely talks of testing.

People often think there's only one way to learn things, but there are other, newer, and safer ways.

The Pushbike of Agile Testing

What would the faster way of learning to be awesome at Agile Testing then look like, in comparison to the current popular ways?

A popular approach to regression testing when moving to agile is heavy prioritization (doing less) and intensive focus on automation.

A faster way to regression testing is to stop doing it completely and focus on being able to fix problems in production within minutes of noticing them, as well as on noticing them through monitoring.

A popular approach to collaborating as testers is to move work "left" and have testers actively speak while we are in story discovery workshops, culminating in a common agreement of what examples to automate.

A faster way to collaborate is mobbing, doing everything together. Believing that, like in an orchestral piece, every instrument has its place in making the overall piece perfect. And finding a problem, or encompassing testers' perspectives in everyone's learning, is a part of that.

A popular approach to reporting testing is to find a more lightweight way, and center it around stories / automation completed.

A faster approach to reporting testing is to not report testing, but to deliver to production in a way that includes testing.

A popular approach to reporting bugs is to tag story bugs to story, and other bugs (found late on closed stories) separately to a backlog.

A faster approach is to never have an internal tester report a bug, but to always pair to fix problems as soon as they are found. Fix and forget includes having appropriate test automation in place.





Diversity of thought requires background differences

I care about diversity and inclusion. When I say that, I mean that I want to see tech, and testing in particular, reflect the general population in its variety, and conferences lead the forefront of the change towards that equal opportunity. Looking from within my bubble, testing is already an area with a lot of women, and seeing testing conferences that struggle with finding one woman worth giving stage to (as some of them phrase it) is just lazy.

Diversity and inclusion mean more than (white) women like me. I want to learn from people who are not like me or the men I've been strongly guided to learn most from through other people's choices. When people are different, their backgrounds and experiences are different, what is easy and hard is different, and they end up learning new, insightful ways of teaching from the platform they have created for themselves.

Some people seem to like saying they want to see diversity of thought in conferences as a way of emphasizing that they don't want to care about diversity and inclusion the way I look at it: recognizing that the platform you teach from, the person you've become through your experiences, is an essential part of being able to really have diverse perspectives.

Diversity of thought could mean:

  • I want conference talks where people tell me that the foundation of my beliefs is off even if it isn't so that I think about it.
  • I want talks on topics I have not yet heard, or really insightful ways of delivering a message I feel is important, so that I could learn from how that message gets passed on.
  • I want people who I feel can add something to what I know (even if usually it happens 1:1 discussing) 
I find real diversity and a representative crowd of speakers worth focusing on in conference design. Given any deeply technical or insightfully hard human topic where you can name a man, I can name a woman or a non-binary speaker who can deliver the talk with experiences the men can never speak of. I still have a lot of work to do on recognizing people of color due to my limited exposure, but I'm working on it. And I still work on being able to name a local expert for any given international expert.

Representation matters. You need to see someone you identify with to believe you can go there. To get the general population of talent into the software development world, it is not enough to showcase the white male talking heads, but to actively show a representation of what the world needs to look like.

It shouldn't be hard to have top-notch speakers for 10 slots in a conference when selecting from a pool of millions of candidates. Yes, there are a lot of people out there who feel they need to work twice as hard to get half the results. How about conference organizers working twice as hard to identify them, and supporting places (like SpeakEasy for testing) that help those people start off on a fast track to awesome speaking?

Finally, back to diversity of thought: I wanted to add that I find it is a catchphrase not founded on reality. In the last three years of conference organizing, I have spoken through 200 proposals a year, totaling now at 500 since I'm halfway through this year. There is no diversity of thought as such that I can see. There's diversity of topics and experiences, and a lot of insistence on using the right words, but overall we mostly share a vision of what good testing looks like from the perspectives of testers and developers. Every one of those stories is worth a stage. But some of those stories are refused a stage because they require work before they are ready, others because there's someone who can deliver a similar story with a different background, and others just because there is not enough space, period.

In the same timeframe, I've spoken at tens of conferences a year and used those conferences to hear other people talk. So I can probably add 15 a year, with 10 talks each - 450 more samples.

In addition, I volunteer for various other conferences that do traditional call for proposals as a reviewer. That adds more.

The datapoints that I have are the people who submit. I'm personally a datapoint that does not submit. I have given stage to many, many people who did not submit - some that never gave a talk before. I found them amongst participants. That's where the real stories I need to get out there are.

Tuesday, July 17, 2018

Pay To Speak - why care and other collected thoughts

I hold a strong belief that the world of conferences needs to change so that it is not #PayToSpeak. This means that while you are not necessarily making money for speaking (that should happen too), you are not paying money out of pocket for speaking.

Recently I spoke at a big conference that has no call for proposals but invites all speakers and says it pays their travel, including the following weekend's hotel nights to enjoy the beautiful city the conference is in. They have a policy of checking in with them on bookings, or of them doing the bookings for you. So when they did my bookings for flights, they had me leave home at 3:30 AM and return home at 2:00 AM. That meant no public transport available to the airport. I could drive and park there, or I could take a taxi. I chose the latter as the cheaper option. When I arrived after 8 hours stuck in travel for something that would be a 2-hour direct flight, I had a taxi pick me up. However, as I needed to leave immediately after my scheduled talk and still almost missed my flight, I had to find (and pay for) the taxi myself. The flight tickets did not include any luggage, so I ended up paying 80 euros just to bring my stuff with me. Packing so that I can take it all onboard means compromises on cosmetics I would rather not make while having to feel presentable. That's one of the women's problems, I guess.

The extra costs totaled 180 dollars, which was more than the cheap flight they found me. Their view was that they wouldn't pay, and that was yet another nail in the coffin killing my willingness to speak. Now it looks like they might pay, but I'll believe it when the money is in my account.

So being against #PayToSpeak means that I believe that while it is reasonable to ask speakers to be considerate of costs (no business class), it is not reasonable to optimize in ways where you just transfer the costs to them.

To be clear, many conferences are #PayToSpeak. Most, in fact, in the field of testing. A little less in the field of programming. And a little less in testing now that we are getting to the point of being a respectable industry (including automation).

Why should the conferences care?

We've seen examples like Selenium Conference moving away from #PayToSpeak. The results they report are amazing.

  • 238% increase in the number of submissions - there are a lot of people with great content who cannot afford to teach at conferences that are pay to speak
  • new subjects, new speakers, new nationalities and new perspectives - all valuable for us learning in the field
  • percentage of women submitting growing from 10% to 40% - showing that pay to speak might disproportionately impact underrepresented groups' ability to teach at conferences

Surely you don't mean that we should pay newbie speakers?

I find that #PayToSpeak conferences present their audiences with three groups of speakers:

  • Those so privileged financially that paying isn't an issue
  • Those with something to sell so that their companies pay them to deliver a sales pitch in disguise. Some better disguised than others. 
  • Those dreaming of public speaking experience, finding no other option but to pay their way, sometimes delivering great talks. Believing paying is temporary. 
I believe that people in the first category can opt out of payments in a conference that pays the expenses. In my experience they rarely opt out of the out-of-pocket money, but many have opted out of profit sharing to leave money behind to support new speakers. People in the second category might become sponsors and pay more than just expenses to attend, and have their sessions branded as "industry practice / tool talks", which is often a way of saying it's selling a service or a tool.

The third category is what makes me sad. This should not be the case. We should pay these speakers' expenses. Our audiences deserve to learn from them.

As a conference organizer, the thing I'm selling is good lessons from a (insert your favorite positive adjective) group of speakers. There are other ways of managing the risk of bad speakers than making sure your audience only gets to listen to the financially-well-off-and-privileged segment.

You could ensure the speakers with less of a reputation get support and mentoring. With things like Speak Easy and folks like myself and Mark Dalgarno always popping up to volunteer on Twitter, this is a real opportunity for conferences as well as individuals.

For Profit and Not For Profit

Finally, these words don't mean what you think they mean around conferences. You can have a not-for-profit conference with really expensive tickets that pays the speakers (this is the vision I build European Testing Conference towards - making real money to pay real money to speakers within a not-for-profit organization, using profits to pay other conferences' speakers' travel). You can have a for-profit conference with ridiculously cheap tickets that still pays the speakers (Code Europe was one like this - they made their money out of selling participants to sponsors in various ways, from my perspective).

Speakers' choices of where they show up with great material matter. But most of all, participants using money on the conferences matter most. Pay for the good players. Be a part of a better world of conferences.



Sunday, July 15, 2018

Testing does not improve quality - but a tester often does!

Being a self-proclaimed authority in exploratory testing, I find it fun when I feel the need to appeal to another authority. But the out-of-the-blue comment the awesome Kelsey Hightower made today succinctly puts together something I feel I'm still struggling to say: testers do their work to save time for the stakeholders next in the chain.

Actually, nothing in this tweet says that you need a *tester* to do this. It just refers to a highly intellectual and time-consuming activity, which to me implies that doing something like that might take a bit of time and focus.

With the European Testing Collaboration Calls, I've again been privileged to chat with people who trigger my focus on important bits. Yesterday it was someone stating an observation very much in sync with what Kelsey is saying here: for many of the organizations we look at that go for full hybrid roles, it turns out that the *minority perspective* of the exploratory testers tends to lose the battle, and everyone just turns into programmers, not even realizing why they have problems in production at a scale beyond "this is what we intended to build".

Today, in prep for one of the Collaboration Calls, I was triggered by the sentence "testing does not improve quality". I sort of believe it doesn't, especially when it is overly focused on what we intended to build and verifying that. The bar is somewhere, and it is not going up.

But as a tester, I've lived a career of raising the bar - through what I call testing. It might have started off like in one organization where 20% of users were seeing big visible error messages, and that is where the bar was until I pointed out how to reproduce those issues so that the fixing could start. But I never stop at where we are now; I look for the next stretch. When the basics are in place, we can start adding more, and optimizing. I have yet to find an organization where my tester work would have stalled, but that is a question of *attitude*. And that attitude goes well with being a tester who is valuable in their organization.

How do you raise the bar through your tester (or developer) role?

Feeling Pressured to An Opinion

It was one of those 40 collaboration calls I've been on to figure out people's content, special because of the way it ended up being set up. The discussion was as usual. There were a few of us having a discussion on an intriguing topic. The original topic was way too big for a single talk, and we collaborated on what the pieces of focus we would consider might look like. 30-minute talks about everything between life and death don't do well and don't end up selected, so in a call we always seek something that really has a fighting chance.

As I ended the call, a friend in the room who had been listening in said: "You can't seriously consider you'd take *that* into the conference program".

I was taken aback, and was stupid enough to budge under pressure and confirm that I wasn't seriously thinking of it. Even though I actually am. I always am.

Because of my failure to - again - stand up for what I believe, letting a verbal bully run over me, I felt like I wasn't true to my values. But I also learned that while in the moment I may be run over, I always come back to fix what I did wrong. So a few hours later, I expressed my annoyance at the style of communication, and recognizing the added bias of this negative interaction, I'm more carefully taking the advice of the two other excited co-organizers I had on that call.

Looking at the interaction a little more was funny in the light of a discussion a few hours later on how awesome visual thinking strategies (a lesson by Lisa Crispin) are in identifying when you're making an observation and when you're making an inference.

Observations are the facts. The exact words the person would use in a call. What you can visually verify. What you can sense with your senses without adding judgment.

Inferences are not facts. They are your observations mixed with your experiences. They are your biases at play. And the reason we teach the difference in testing (and try to practice it), is to recognize the difference and seek fairness.

What the friend didn't see fit for the conference program was only unfit through their biases and experiences of when people are collaborative and when they are pushy. There's room for pushy people like them, pressuring me into opinions I need to come back to defend, so I'm sure there's room for all sorts of people.

Software is easy but people are hard. And the combination of the two is fascinating.

Saturday, July 14, 2018

All of us have a test environment

This post is inspired by a text I saw fly by as I was reading stuff in the last hours: All of us have a test environment, but some of us are lucky enough to have a production environment too.

Test environments have been somewhat of a specialty of mine for the 25 years I've spent in testing, yet I rarely talk about them. So to celebrate that, I wanted to take a trip down memory lane.

Lesson One: Make it Clean

I started as a localization tester for Windows applications. After a few decades, I still remember the routines I was taught early on. As I was starting to test a new build (we got them twice a week back then), I would first need to clean my environment. That meant disk imaging software and resetting the whole Windows operating system to a state close to factory settings. Back then it wasn't a problem that after doing something like this, you'd spend the next hours receiving updates. We just routinely took ourselves to what we considered a clean state.

As you'd find a problem, the first thing people would always ask you was if it was tested on a clean machine. I can still remember how I felt with those questions, and the need to make sure I would never fail that check.

Lesson Two: You Don't Have to Make it Clean

Eventually, I changed jobs, and obviously took with me a lot of the unspoken attitudes and ideas my first job had trained me in. I believed I knew what a test case looked like (numbered steps, y'all!) to the extent that I taught university students what I knew, ruining some of them for a while.

I worked on another Windows application, in a company with less of a routine on the cleanliness of test environments, and learned a lot about the fact that when environments are realistic as opposed to clean, there's a whole category of relevant problems we find. It might have made sense to leave those out as long as we were focusing on localization testing, but it definitely did not make any sense now that I was doing functional testing.

I realized that we were not just testing our code, but our code in an environment. And that environment has a gazillion variables I could play with. Clean meant fewer variables in play and was useful for a purpose. But it definitely was not all there was.

Lesson Three: The Environment is Not My Machine but It's Also My Machine

Time moved forward, and the application types I was testing became more varied. I ended up working with something I'd label as client-server and later on web. The client-server application environments were no longer as much under my personal control as the Windows applications I started off with. There was my machine with the client on it, but a huge dependency on a server somewhere, often out of my reach. What version was where mattered. What I configured the client to talk to mattered. And I learned of the concept of different test environments that would receive fairly regular new deliveries.

We had an integration test environment, meaning the server environment where we'd deliver new versions fairly often and that was usually a mess. We had a system test environment, where we'd deliver selected versions as they were deemed good enough from whatever was done with the integration test environment. And we had an environment that was a copy of production, the most realistic but also not a place where we could bring in versions.

For most people, these different environments were a list of addresses handed to them, but that was never my approach. I often ended up introducing new environments, rationalizing existing ones with rules, and knowing exactly what purpose each of them could give me with regards to how it impacted the flow of my testing.

Lesson Four: Sometimes They Cost a Million 

Getting a new environment wasn't always straightforward; it was usually a few months of making a business case for it and then shopping for some rack servers we could hide in our server lab. I remember standing in front of one of those racks, listening to the humming both from it and from the air conditioning needed to run that much hardware, and being fascinated. Even if it took a few months of arguing and a few more months of delivery, it was still something that could be done.

But then I started working with mainframes, and the cost of a new environment went from some thousands to a million. It took me two years to get a new environment while working in this context.

Being aware of the cost (not just hardware, but the work to configure it), I learned the environments we were working with in even more detail. I would know what data (which day's scrambled production copy) resided where. I would know which projects would do what testing that would cause the data to change. In a long chain of backend environments, I knew which environments belonged together.

In particular, I knew how the environments were different from the production environment, to the extent that I still think of a proud moment in my career when we were taking a multi-million project into production as a big bang, and I had scheduled a test to happen in production as the first thing to do - one that we couldn't do elsewhere, as the same kind of duplication and network topology wasn't available. And the test succeeded, meaning the application failed. It was one of those big problems, and my pride was centered around the fact that we managed to pinpoint and fix it within the 4-hour maintenance window because we were prepared for it.

Lesson Five: It Shouldn't be Such a Specialty

Knowing the environments the way I did, I ended up being a go-to person for people to check which of the addresses to use. I felt frustrated that other people - in the same kinds of positions that I was holding - did not seem to care enough to figure it out themselves. I was more successful than others in my testing for knowing exactly what I was testing, and what pieces it consisted of. My tests were more relevant. I had less of "oops, wrong environment, wasn't supposed to work there".

So think about your environments and your attitude towards them. Are they in your control, or are you in their control?

Tuesday, July 10, 2018

What is A/B Testing About?

Imagine you're building a feature. Your UX folks are doing what they often do, drawing sketches and having them tested on real users. You've locked down three variations: a red button, a rainbow button and a blue button. The UX tests show everyone says they want the red button. They tell you it attracts them, and red generally is most people's favorite color. You ask more people and the message is affirmative: red it is.

If you lived in a company that relies heavily on A/B tests, you would create three variations and make releases available with each variation. A percentage of your users would get the red button, and similarly for the two other colors. You'd have a *reason* why the button is there in the first place. Maybe it is supposed to engage the users to click, and a click is ordering. Maybe it is supposed to engage users to click, and a click is just showing you're still active within the system. Whatever the purpose, there is one. And with A/B tests, you'd see if your users are actually clicking, and if that clicking is actually driving the behaviors you were hoping for.

So with your UX tests, everyone says red, and with your A/B tests, you learn that while they say red, what they actually click is blue. People say one thing and do another. And when asked why, they rationalize. A/B tests exist, to an extent, because asking people is an unreliable source.
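The mechanics described above can be sketched in code. This is a minimal, hypothetical illustration - the variant names, user ids, and event-log format are my assumptions, not any real system's API: each user is deterministically bucketed into a variant by hashing their id, and the logged clicks are summarized per variant.

```python
import hashlib

# Hypothetical variant list for the button-color example.
VARIANTS = ["red", "rainbow", "blue"]

def assign_variant(user_id: str) -> str:
    """Hash the user id so each user always sees the same button color."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

def click_rate(events):
    """events: list of (variant, clicked) tuples logged from production.

    Returns the fraction of impressions that resulted in a click,
    per variant - the number the A/B test actually compares.
    """
    totals, clicks = {}, {}
    for variant, clicked in events:
        totals[variant] = totals.get(variant, 0) + 1
        clicks[variant] = clicks.get(variant, 0) + int(clicked)
    return {v: clicks[v] / totals[v] for v in totals}
```

For example, `click_rate([("red", True), ("red", False), ("blue", True)])` returns `{"red": 0.5, "blue": 1.0}` - and that comparison, not what users said in the UX interviews, is what decides the button color.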

What fascinates me about A/B tests is the idea that as we introduce variation, and combinations of variations, we explode the space we have to test before delivering a product for a particular user to use. Sometimes I see people trusting that the features aren't intertwined and being ok with learning otherwise in production, thus messing up the A/B tests when one of the variation combinations has significant functional bugs. But more often I see people not wanting to invest in variations unless the variations are very simple, like the example of the color scheme of buttons.

A/B testing could give us so much more info on which of our "theories" about what matters to users really matter. But it needs to be preceded by A/B building of feature variations. I'm still on the fence about how much effort, and for what specific purposes, organizations should be willing to invest to really hear what the users want.

Sunday, July 8, 2018

The New Tasks for an Engineering Manager

I've now been through three stages of transitioning to an Engineering Manager role.

First stage started off as my team interviewed me and decided they would be ok having me as their manager. People started acting differently (too many jokes!) even though nothing was really different.

Second stage started when my manager marked me as the manager in the personnel systems and I got new rights to see / hear stuff. I learned that while as a tester I never had to do mundane clicking, it was pretty much the core of my new managerial responsibilities: accepting people's hour reports (only if they insist on doing them; we have an automated way for it too), people's vacations, and expense reports.

Third stage started when I started finding the work left undone while the engineering manager position was open. The recruitment process with all its steps. Supporting new people joining the organization. Saying goodbye to people when we agree the person and the work are not right for each other. Rewarding existing people and working towards their fair pay.

I found an Engineering Managers' Slack group, and have been fascinated with the types of things Engineering Managers talk about. A lot of this stuff is still things I was doing while identifying as "individual contributor".

I've found two weird powers I have now been trusted with: terminating someone's contract is just as easy in the systems as accepting hour reports (and there is something really alarming in that). And as a manager, I have access to proposing bonuses without having to do all the legwork I used to do to get people rewarded.

Officially one month into my new role and now one month on vacation. We'll see what the autumn brings.