Tuesday, November 12, 2019

A Feature Was Born

There is a fascinating phenomenon that I am following, one I would call "Overanalyze, Underdeliver". I find it fascinating because I often catch myself wanting to slow things down to think, not trusting others' thinking, and assuming that delivering something could be the end of it.

When delivering truly continuously, there is no beginning, and no end. There are just steps on our journey to build something our users find more valuable, rather than less.

There is an evolving product vision we work against. It wasn't defined by product management, but it is most certainly influenced by them channeling different stakeholders - with their particular kind of filter. It is emerging from discussions with many different parties, including customers we actively seek out.

From this foundation, a single developer can have a great idea of how to make things better.

Over the last week, I have been following a feature being born and the discussions and actions we take around it.

Awareness of such a feature was born two years ago, and the wishful thinking around it was cut down before it bloomed. It needed people in a particular team to have time for it, and they had higher-priority work.

In two years, things changed. Not that the particular team now has time; they don't. But we took our internal open source practices to the next level, where we don't only share components in our main programming language, but bravely go polyglot-one-more and, with the right motivation, can make changes beyond our previous scope.

So it bloomed again. The "we need to think this through" meaning "I can't think this through right now" came about again. But this time instead of spending time on thinking it through in an abstract way, a developer molded the thing in code.

Today came the time to think it through - demoing, testing and improving the feature as a group. Tomorrow is the time to get it in, in a pull request.

A feature was born. It was born in a time where, had we chosen the discussion route, we would still be discussing. What fascinates me most is how much power there is in breaking off the defaults and reorganizing the flow.

We probably saw the earliest exploratory testing & fixing we have been capable of so far - before ever making that pull request.

An early Christmas for a Testing Dreamer. And just a happy day for a process rebel.

Monday, November 11, 2019

What Does the Testing We Do Look Like?

Back in the days before releasing frequently was a thing, testing looked very different. The main difference was that we thought of automation as something that was replacing attended testing, whereas we now see it more as a way of introducing scale of unattended testing we could never have done attended.

The stuff we attend to changed as the world around us shifted. We still do "testing" with just as many hours, but the work is split across heavy work on unit tests, test automation and other quality practices.

I try to explain the change I see with a metaphor: pool is not a bigger bathtub. Both these are containers of water (just like software development is just making and delivering code changes), but what you can do in a pool is very different than what you can do in a bathtub. Imagine a bath guard - kind of hilarious. Or a bath party - very eccentric. We create new things we can do with just a bigger container of water, and bringing down the release cycle does something very similar to the ways we work. The whole conceptual model of what makes sense changes.

In an effort to describe what seems to make sense right now, I work on finding words for what the testing I do looks like to me. It is not a tool for sense-making the whole world of all testing, and I don't need a tool for that - I don't work in the whole world, I don't consult in the whole world. I merely describe my lessons from where I am, for the organization that I work for, but also in my understanding of how the world comes together.

For me, it all is customer-obsessed and developer-centric. We all care for where the money comes from for our business. And we recognize and appreciate that we have something out there that people are already paying for, that we want to always improve and never make worse. But we already have something valuable.

Smart Developers Turn Ideas Into Code

Since it is already out there, I can't describe a phase of requirements gathering; rather, it begins with a vision of value. There is always, even if informally, a mostly shared base of understanding of the types of things we are providing for the customers. I recognize vision not from a document someone wrote, but from discussions with multiple members of the community bringing perspectives that drive us in a similar direction. Similar, not the same, as vision is at its most powerful when it guides individual actions without excessive coordination.

From having something out there working for the customers, and the vision, we come to a set of ideas on what we could try to make things better. This part of the process, coming up with the ideas and acting on them by turning them into code, is where the change gets made.

Some ideas are bigger, and they are hard to grasp alone. Smart people are not alone, but surrounded by other smart people. Together, we can make ideas better, and as such, the resulting code better. Or we can discover the ideas in the first place.

Let me share a small example of how I can reasonably expect my days to be from today. 
I had a Windows machine I had kicked up from an image last week, forgetting that the images in that particular set give me a fairly old version of Windows, one where our .NET dependency for showing a full UI stops me from seeing the UI that I wanted to test. I ran a Windows update tool on Friday, and since that takes its time, did something different before going back to the computer today. As I remoted to the computer, I saw the IP but couldn't remember the name of the computer. There is one very simple way for me to find it: logging into our security management portal and seeing it there. So I did.
I found the computer and got the things I needed, but also noticed we had started showing both IPv4 and IPv6 addresses. Having been elsewhere, I had not followed that detail, but just looking at it I said that I didn't like a detail in how they were ordered. Five minutes after casually mentioning this, I had a pull request to review that fixed the problem. We added a bit of discussion around what I would test around more network interfaces, which was hard to simulate, and another pull request with an additional unit test was created.
My tester contribution can happen anywhere, anytime, without the constraint of a process. It happens from a foundation of me (the tool) being in the right mind in the right place to connect things. And I cultivate the chances of that lucky accident through discussions, but also hands-on with my external imagination - the product - either directly or through a set of scripts known as test automation (TA).

On doing my work, I rely on an existing structure that we have built and are enhancing as we learn what we are missing:
  • A Smart Developer creates code to change the application 
  • Unit tests on local machine to 80-90% coverage 
  • Pull Request Review - a minimum of second pair of eyes on change
  • Static tools of various flavors in the pipeline
  • Unit tests in the pipeline
  • TA in the pipeline (with TA telemetry)
  • CI environment application telemetry
  • Change-log driven exploratory testing
  • Changes out in other product line continuous beta
  • Changes out in internal pilot
  • Changes out in early access
  • Synthetic monitoring aka. production TA (or machines, in production, continuously monitored)
  • Production Telemetry, positive events and error events 
The line "TA in the pipeline" is more than a few scripts run, and I will dedicate a post of its own to it later.

With every change, we try to leave things better than they were before. Caring developers pulling in the help they need, and making that help readily available, is what testing looks like to me. Testers are developers. We change things, but we also change the other developers' perceptions.

We see things other people don't. 

Saturday, November 9, 2019

A Career Retrospective

With a few decades in the software industry, I have something I would call a career. I could not have seen in advance how my career would unfold, and I could never have described what I do as a path I wanted to take.

A few heuristics have served me well:
  • Do something you enjoy (and some of it should bring money in)
  • Always be learning. 
  • Have a goal, just to recognize whether what you enjoy takes you there. If not, change the goal. 
For some years, I had a goal of becoming an international keynote speaker and I made a lot of my choices around that goal. I chose jobs that built me a platform of experiences to speak on, doing things hands-on with product development rather than consulting. And I became a keynote speaker who wants to quit speaking and replace her contribution of 20-30 talks a year with 20-30 new speakers who start from a platform of mentoring.

My current goal goes under the working name "Maaret to Wikipedia" and if you have ideas on how to do that, I am open to suggestions. Currently it feels funny, self-indulgent and next to impossible to see the route, yet it is already making an impact on how I prioritize - within things I enjoy - the things I end up doing. The best thing about this goal is how supported I feel for it at work with my close colleagues and how it helps me see some people who always lift me up when I need it (Marit van Dijk, I'm thinking of you).

A lot of the work I do is invisible.

I help people who are speakers to get their messages out better. Sometimes giving people the transformative insight into what they are speaking on takes me literally a few minutes, and the people seeing their own experience in a different light completely miss the shine I added.  They would not be better off without running into me - usually very intentionally on my part.

At work I facilitate a developer-centric way of working in a way that mixes holding space for good things, injecting good things and leading by doing some of the things. The work I do leaves behind very few pull requests, but many things others do shine a little better because of the stuff I do.

I change jobs fairly often, leaving behind people who, I would hope, know something more because our paths have crossed. The company's interest is not in lifting their people up, but rather in abstracting us under a brand.

I write blogs, articles and all kinds of texts. It changes what some people do. Yet when they do it, doing it is their own success.

I build talks of my experiences, I show up to share, and I discuss with people. Meeting people gives me a lot of energy, new ideas and drive.

I'm not exactly a one-trick pony within software - but software all in all is my trick. My interests are manifold. I speak and write about exploratory testing, test automation, teaching programming, mob & pair programming, agile, management, self-development, conference organizing, speaking, diversity, and any observations around software that I feel like. I'm usually known for things I do on the side, rather than the things I focus on.

What I'm particularly proud of is my ability to re-invent myself and see my belief systems shattered - with my own initiative. Listing things that I believed to be true that aren't so is one of my favorite pastimes.

A Vague Timeline

In recent reflections, I have come to appreciate how large chunks of my work during my career have been left to oblivion, as per how things are and my personal choices of not sticking with them. They've all given me a platform to observe things from. They also bring out feelings of wishing someone had taught my younger self some of the things I now know. But I also recognize that my younger self did what she could with the conditions she was under, and every experience I have had has made me the person I am today. Hindsight bias makes us feel like we could have known things, and if there is one thing exploratory testing really enforces in learning, it is that missing things is the reality and the outcomes are unpredictable.


Now:
  • Describing test automation at work as a baseline for returning to research - on applying AI in testing, and applying testing in AI-based systems
  • Building a self-organized developer-centric team with modern agile practices that have enough structure for the powerless
  • Writing further my books on Mob Programming, Strong-Style Pair Programming and Exploratory Testing
  • Organizing a conference as experimentation platform to change the world of conferences
  • Helping aspiring speakers by finding them mentors with SpeakEasy (or mentoring them myself)
Before that, each step goes sort of backwards in time, in a way that makes sense to me:
  • Becoming an expert in exploratory testing. I've done this all my career, and it is the one thing that has been my continued focus. 
  • Becoming an expert in engineering management. I did not realize I had been learning this in my test manager role before. A few decades of reading every book on the topic to manage up effectively as a tester did help. 
  • Becoming an expert in test automation. Moving it from none to some, and from some to better. Knowing well what better looks like. 
  • Speaking at conferences and meetups and delivering training sessions, totaling 399 sessions. 
  • Discussing (and improving) conference proposals in 15 minute time-slots over three years with about 500 people and discovering a process I call "Call for Collaboration". 
  • Popularizing "Testers don't break your code, they break your illusions about the code" by speaking about it, elaborating it with samples from my professional life, beyond testing conferences. The guy who said it did not do the work I did around it. Google for evidence and stop assuming my work belongs to him. 
  • Introducing frequent product releases where it was "impossible" as release updates computers in the millions. 
  • Introducing daily product releases where it was "impossible" as there was no test automation. 
  • Organizing 5 years of European Testing Conference to learn how (if) conferences should pay the speakers, to create a true networking conference and to bring together developers and testers on a shared testing agenda.
  • Becoming an expert in pair and mob testing (and programming). 
  • Teaching programming (in Java) to women over 30 and kids with the Intentional method using pair and mob programming as core instruments in teaching.
  • Teaching Software Testing at Aalto University / Helsinki University of Technology, both as main lecturer and as visiting industry speaker
  • Doing my first keynote to only be known as the woman the other keynoter spent their keynote bashing "out of respect and surprise how alike we think". 
  • Building and teaching a 22-day on-site Testing training program to enable unemployed career changers into the industry. Delivering a second iteration as independent trainer. 
  • Running Finnish Association for Software Testing for a decade and letting it wither away as a man was rewarded and thanked for starting the thing. Starting Software Testing Finland (Ohjelmistotestaus ry) to start over, only to realize that there was no correcting, as any communities around the topic in Finland are intertwined in people's minds. 
  • Becoming an expert in complex test environments. If you ever feel like talking about the kinds of environments that cost a million and take a minimum of 6 months to deliver, then we have similar experiences. 
  • Becoming an expert in defect management and bug advocacy. Analyzing a large set of defect management tools in order to select one against requirements gathered in a fairly large organization. 
  • Becoming an expert in acceptance testing. I know how to get domain experts clueless on testing just enough structure to excel and not waste effort and impact the quality at start of acceptance testing through contracts and collaboration. I spent some years intensively learning it. 
  • Becoming an expert in test management. Running multi-million projects as test manager, but also running smaller ones. I did this for different companies to get the crux of it.
  • Becoming an expert in software contract quality and testing -related aspects. If you ever want to spend a few hours on discussing how badly contractors can behave and how you recognize loopholes in contracts around this, I'm your person. 
  • Becoming an expert in software processes leading up to agile. When Alistair Cockburn asked who had read his work on Crystal, there were not many others in the room who had. Research gives you chances to read and think deeply about what others are saying. 
  • Becoming an expert in benchmarking with the TPI-model. Analyzing 25 Finnish companies with TPI-model and doing a benchmark on state of testing in Finland. I can still speak on the details because I did the work even if the company kept me in the background. 
  • Doing my first talk on the topic of Extreme Programming in 2001. 
  • Researching (and publishing) on software product development, and (exploratory) testing
  • Becoming an expert in localization testing. I spent years running localization testing projects and doing it myself and learning everything I could read on then and since on how localization testing works. 

Even if I have my "Maaret to Wikipedia" project, it serves more as a way of thinking through what there is that I could even do. At the end of the day, I go back to my heuristics: do what you enjoy, and always be learning. Goals move, but the appreciation of learning with great people remains. 

Rethinking Test Automation - From Radiators to Telemetry

Introducing Product Telemetry

A week after we started our "No Product Owner" experiment a few years back, the developers, now each playing their bit of product owner, decided they were no longer comfortable making product decisions on hunches. In the now-common no-hassle way, they made a few pull requests to change how things were, and our product started sending telemetry data on its use.

As is so often the case, things in the background were a little more complex. There was another product doing the pioneering work on what kind of events to send and on sending events, so we could ride on their lessons learned and, to a large extent, their implementation. The thing I have learned to appreciate most in hindsight is the pioneering work they did on creating for us an approach that cares for privacy and consent as key design principles. I've come to appreciate it only through other players asking us how we do it.

The data-driven ways took hold of us, and transformed the ways we built some of the features. It showed us that what our support folks know and what our real customers know can be very far apart, and we as a devops team could change the customer reality without support in the middle.

The concept of Telemetry was a central one. It is a feature of the product that enables us to extend other features so that they send us event information about their use.

At first product telemetry was telling us about positive events. Someone wanted to use our new feature, yay! From the positive, we could also deduce the negative: we created this awesome feature and this week only a handful of people used it, what are we not getting here? We learned that based on those events, we did not need to ask all our questions beforehand; we could go back and explore the data to learn patterns that confirmed or rejected our ideas.
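Deducing the negative from positive events is, at its core, a small set operation over the event stream. A minimal sketch in Python - the feature names and the event shape are made up for illustration, not our actual schema:

```python
from collections import Counter

# Hypothetical usage events as they might arrive from the product.
events = [
    {"feature": "quick_scan"},
    {"feature": "quick_scan"},
    {"feature": "new_report"},
]

# Positive events tell us what gets used...
usage = Counter(e["feature"] for e in events)

# ...and from the positive we deduce the negative: features we
# shipped that do not show up in the data at all.
shipped = {"quick_scan", "new_report", "awesome_feature"}
never_used = shipped - set(usage)
print(sorted(never_used))  # -> ['awesome_feature']
```

The same pattern extends to "only a handful of uses this week" by filtering the counter on a threshold instead of checking for absence.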

We soon came to the conclusion that events about error scenarios would also tell us a lot, and experimented with building abilities to fix things so that the users wouldn't have to do the work of complaining.

This was all new to us and as such cool, but it is not like we invented this. We just did what the giants did before us, adapting it to ensure it fits to the ideas of how we are working with our customers.

We Could Do This in CI!

As telemetry was a product feature, we tested it as a feature, but did not at first realize that it could have other dimensions. It took us a while to realize that if we collected the same product telemetry from our CI (testing) environment as we did in production, it would not tell us about our customers but it would tell us about our testing.

As we did that, we learned that the scale of things creates fascinating coverage patterns in the way we test (with automation in particular). There were events that would never be triggered. There was a profile of events that was very different from that of production. A whole new layer of coverage discussions became available.

This was a different use of the same product feature in test than in production.
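The coverage discussion this enables can be sketched as comparing two event profiles. The event streams below are invented for illustration; the real data would come from the telemetry backend:

```python
from collections import Counter

# Illustrative event streams from the two environments.
production_events = ["open_ui", "open_ui", "save", "export", "export"]
ci_events = ["open_ui", "save", "save", "save"]

prod_profile = Counter(production_events)
ci_profile = Counter(ci_events)

# Events seen in production that our automation never triggers
# point at coverage gaps.
never_in_ci = set(prod_profile) - set(ci_profile)
print(sorted(never_in_ci))  # -> ['export']
```

Beyond outright gaps, comparing the relative frequencies of shared events shows where the CI profile is skewed away from what customers actually do.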

The Test Automation Frustration

To test the product we are creating, we have loads of unit tests that do a lot of the heavy lifting in giving feedback on mistakes we may make when changing things. As useful as unit tests are, we still need the other kinds of testing, and we bundle this all together in a system we lovingly call TA. As you may imagine, TA is shorthand for Test Automation, but I rarely hear the long form at work; TA is all around.

"We need to change TA for this."
"We need to add this to TA."
"TA is not blue. Let's look at it."

TA for us is a fairly complex system, and I'm not trying to explain it all today. Just to give some keywords of it: Python3, Nosetest, DVMPS/KVM, Jenkins, and Radiators.

Radiator is something you can expect to see in every team room. The ones we're using were built by some consultants back in the days when this whole thing was new, and I have only recently seen modernized versions someone else built in some of the teams. It's a visual into all of the TA jobs we have and a core part of TA as such.

The Radiator embodies a core principle of how we would want things to be: we would want it to be blue. As you can see from the image of its state yesterday as I was leaving the office, it isn't.

When a box in that view is not blue, you know a Jenkins job is failing. You can click on the job, and check the results. Effectively you read a log telling you what failed.

A lot of the time, what failed is that some part of the infrastructure the TA relies on was overloaded. "Please work on the infrastructure, or try again later."

A lot of the time, what failed is that while we test our functionalities, they rely on others'. Those may be unavailable or broken. Effectively we do acceptance testing of other folks' changes in the system context.

Some people love this. I love it with huge reservations, meaning I complain about it. A lot. It frustrates me.

It turns me into someone who either ignores a red or risks overlapping work. It requires a secretary to communicate for it. It begs people to ignore it unless reminded. It casts a wide net with poor granularity. It creates silent maintenance work where someone is continuously turning it back to blue, which hides the problems and does not enable us to fix the system that creates the pain.

I admire the few people we have who open a box and routinely figure out what the problem was. I just wish it already said what the problem is.

And as I get to complaining about the few people, I get to complain about the logs. They are not visitor-friendly. I don't even want to get started on how hard it is for people to tell me what tests we have for X (I ask, to share my pain) or for me to read that code (which I do). And the logs reflect the code.

From Radiator to Telemetry

A month ago, I was facilitating a session to figure out how to improve what we have now in TA. My list of gripes is long, but I do recognize that what we do is great, lovely, wonderful and all that. It just can be better.

The TA we have:

  • spawns 14 000 Windows virtual machines a day (an older number; I am in the process of checking a newer one)
  • serves three teams, of which my team is just one 
  • runs 550 unique tests for my team across a number of Windows flavors on each pull request 
  • tests all the 15 products we are delivering from my team
  • runs 100 000 - 150 000 tests a day for my team
  • finds crashes and automatically analyzes them
  • finds regression bugs in important flows
  • enables us not to die of boredom repeating the same tests over and over again
  • allows us to add new OS support and new products very efficiently

The meeting concluded it was time for us to introduce telemetry to TA - and some of the numbers above on the unique tests and number of runs daily are our first results of that telemetry in action.

Just as with the product, we changed the TA product to include a feature that allows us to send event telemetry. 

We now see things like passes and fails in the context of the large numbers, instead of the latest results within a box on the radiator. 

We see things from multiple radiator boxes combined together into the reasons we previously needed to verify from the logs. 

We see which tests take long. We see which tests pass and which fail. 
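The kind of aggregate view this gives, as opposed to the latest-result-per-box radiator, could be sketched as a fold over run events. The test names, event shape and numbers here are hypothetical:

```python
from collections import defaultdict

# Hypothetical TA run events: (test name, outcome, duration in seconds).
runs = [
    ("test_install", "pass", 42.0),
    ("test_install", "fail", 40.5),
    ("test_upgrade", "pass", 120.3),
    ("test_upgrade", "pass", 118.9),
]

stats = defaultdict(lambda: {"pass": 0, "fail": 0, "total_time": 0.0, "count": 0})
for name, outcome, duration in runs:
    s = stats[name]
    s[outcome] += 1           # pass/fail counts over all runs, not just the latest
    s["total_time"] += duration
    s["count"] += 1

# Slowest tests first: results in the context of the large numbers.
for name, s in sorted(stats.items(),
                      key=lambda kv: kv[1]["total_time"] / kv[1]["count"],
                      reverse=True):
    avg = s["total_time"] / s["count"]
    print(f"{name}: {s['pass']} pass / {s['fail']} fail, avg {avg:.1f}s")
```

With the real event volumes, the same aggregation answers "which tests flake" and "where does the pipeline time go" without anyone opening a single Jenkins log.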

And we have only gotten started.

The historic date of the feature going live was this Thursday. I'm immensely proud of my colleague Tatu Aalto for driving through the code changes to make it possible, and of the tweets where he corrects me on my optimism, warning of a few bugs he had already fixed. I'm delighted that my colleague Oleg Fedorov got us to a solution through seeing things. And I can't wait to see what we make out of it. 

Monday, November 4, 2019

A meeting culture transformation

As I was looking into mob programming some years back, we summarized a common theme of complaints into a little cartoon with people discussing in a meeting room.
Person 1:
My team is interested in trying Mob Programming.
The idea is everyone works together on one computer.
The person at the keyboard is just typing what the whole team tells them to. So everyone is involved, instead of 5 people watching 1 person work.
You rotate quickly, every 5 minutes, to develop cross-functional teams and eliminate knowledge silos.
Ideas get implemented the best way the team can no matter who has them.
Misunderstandings and bugs are minimized.

Person 2:
Sounds like I'd be paying 5 people to do 1 job.
Now let's stop talking such nonsense. I still have a lot of slides to go through.
The word around is that managers hate mob programming. As a manager who wants my team to do mob programming while they refuse, I think we love blaming managers for our own assumptions that we did not keep in check.

Up until this morning when I came to the office, I was discussing how Mob Programming is different from a meeting. What changed this morning is that a colleague read my latest Mob Programming Guidebook and pointed out that while we don't really do full-on mob programming, we have managed to transform our meetings into little mob sessions.

It's funny how you need someone else's eyes to see how you're different.

For the last three years here, I have not gone to a single meeting with slides prepared.
I don't go unprepared. But I never ever write an agenda in advance.
When I start a meeting, we build an agenda. It might be that we actively take time to build it. Or it might be that we build it by parking themes that pop up that are relevant but not about the thing we are trying to sort out right now.
We work the agenda within a timebox either by doing the most important work first, or by doing just enough of it that the rest can happen offline, outside the meeting without others losing context completely.

As my colleague points out: all our meetings are little mob sessions. How about yours?

Sunday, November 3, 2019

Mobbing with an Audience

I've run some hundreds of mob programming and testing sessions with new groups for the purposes of conference talks and trainings, and while I prefer setting up a full-day session so that I can mob with the whole group of 25 people, sometimes I end up splitting the group for demo purposes. I was writing about this for the new version of the Mob Programming Guidebook, and thought it might make useful content as a blog post. 

Mob programming with an audience is a special setup that is a useful tool, especially for someone teaching mob programming, teaching any software development skill in a hands-on style, making new kinds of sessions available for conferences, or generally running demo sessions with partial participant involvement. As a conference speaker and a trainer, a lot of our mob programming experience comes from facilitating mob programming sessions with various groups. For a training, we usually set the whole group up as a mob where everyone rotates. For conference sessions, where time constraints limit participant numbers for effective mobbing, we use mobbing with an audience.


For mobbing with an audience, you split the room to two groups:
  • The Mob. The most effective mob made of complete strangers is small. You want a diverse set of mob programmers. These are the people doing the work. 
  • The Audience. The rest of the group sits in rows as the audience. The role of the audience is to watch and make observations, and their participation is welcome when doing a retrospective.
For the mob, you will set up a basic mob setup at the front of the room, with chairs for each person and the whiteboard furthest away from the computer, so that the physical setup ensures speaking volume for the designated navigator.

For this setup, you will need a room with chairs that are freely moving. Make sure text on the screen is big enough not only for the mob to see, but the audience to follow as well.


As we have run some hundreds of sessions with various groups in this format, we have had things go wrong in many ways.

Things you can do in advance to ensure fewer problems
  • If the room is big, ask for a microphone for both the driver and the designated navigator. It is essential that people in the room can hear their dialog. While no decisions are allowed in the driver seat, speaking back to the navigators, pointing out things you see and they don't, is often necessary. 
  • If you have only one microphone, give that to the designated navigator. Even in smaller rooms, the microphone can work as a talking stick the designated navigator passes around for other navigators and can help create an atmosphere where everyone in the mob gets to contribute. 
  • Make sure the text on the screen is visible from the back row. Avoid dark theme, it does not serve you well for live coding and testing in front of an audience. 
  • When selecting the diverse mob, what you need to do depends on who you are. If you are a white man facilitator and want women, start with inviting women, or facilitate mob member selection in a way that gives you a diverse set of mob programmers. As a white woman, women volunteer for me in ways they don't for the men, and I need to work on other aspects of diversity. 
  • For a demo mob, you may want to demo a group with experience working on the problem and even together. If that is your aim, invite the people you want for the mob in advance. 
  • A new mob with different experiences highlights many powerful lessons around collaboration and people helping each other, and a fluent demo is probably not your goal anyway. New programmers exclaiming they now know how to do TDD, as equal contributors, is a powerful teaching tool. 
Things you can do while mobbing to improve the experience
  • Encourage people in the audience who want to navigate to join the mob. To be more exact, demand it; navigating from the audience while holding one's own perspective can be very disruptive. 
  • If you want to introduce who is in the mob, you can do that on first round of rotation. If you want deeper introduction, you can have a different question to tell about themselves on each round of rotation. 
  • When people rotate, ask them to tell what they will continue on. It helps enforce the "yes, and" rule and is sometimes necessary when nervous participants have been building a private plan while waiting for the hot seat. 
  • When the group is stuck, ask questions: "Does it compile?", "What should you do next?", "Did you run the tests?", "What are your options now?". Your goal is not to do things for them but to get them to see what they could be doing. 
  • When the group is stuck not knowing how to do a thing, say "Let me step in to navigate" and model how to do it for a short timeframe. Expect the group to do it themselves the next time. 
Things you can do in a retrospective to salvage a messy session
  • Facilitate the retrospective towards discussion of the reasons behind the lack of progress, so the group can learn from them. 
  • Introduce theories or ideas of how you could try doing things differently the next time. 
  • Find your own style of facilitating groups of strangers. Having seen multiple people facilitate, there are style differences where one person's approach would feel off coming from another. Strong-handed "supporting progress" and light-handed "enabling discovery" result in sessions that are different. 

Saturday, November 2, 2019

Never Stop Learning

I have full-time work that I enjoy, and I carefully review my own satisfaction against the impact going on at work. I require of myself a balance of being productive and generative. Not one or the other, but a balance of the two.

I'm being productive when I:

  • strategize testing and communicate strategies so that we are better aware of problems I will be looking for
  • test (possibly documenting the testing as test automation) to add to coverage of what might work, and particularly to identify things that did not
  • have the gazillion discussions leading, over time, to a process improvement or someone else's raise
  • fix problems, be it in the program or in the way people interact
I'm being generative when I: 
  • teach others how to do better testing when I am not around to do it
  • lead people into insights that make them do things in a way that is more productive
  • bring in ideas that inspire me and through me, us overall
The way I control my work weeks is that I try to be mindful doing things that are directly for my employer the 40 hours a week, and then have 'hobbies' that resemble work but are fully my choice, my control - even though these activities benefit my employer too. 

Realistically, I cannot split work and fun. Work is fun. So I manage my own expectations of what I do, and try being mindful of the work-life balance when the lines are blurred by my own choices.

A better framing is that I spend 140% doing stuff that resembles work and could be work. On top of that there's family, friends and stuff that does not resemble work. Writing a blog post on a Saturday resembles work. 

I do this because my interests are divided. While I love the impact we are building for at work, which I have defined as my purpose (while there, for now), I also love making a dent in the world outside: helping new speakers get started, building my own talks, writing articles beyond what can fit in my work day frame. 

In theory, I could be giving more for the purpose at work. The 100% time I give them could arguably be more awake, more focused if I wasn't doing all the other things. But thinking this way would be shortsighted because the 40% time gives me learnings that change who I am and what I can do, both in providing motivation and actual skills. 

Having discussed this with a colleague with similar yet different profile, I'm taking a learning from it: 

It's not the hours and their efficiency today, it's the continuous growth on our ability to deliver. 

It's the math of never stopping learning. 

Wednesday, October 30, 2019

Assert and Approvals, and Why that Matters

As an exploratory tester, I find unknown unknowns core to my existence. I stumble upon problems - to the extent that people like Marit van Dijk call out that "I don't find bugs, the bugs find me". But stumbling upon them is no accident. I intentionally find my way to places where bugs could be. And when there is a bug, I try to fine-tune my ability to recognize it - a concept we testers call oracles.

As I'm automating, I codify oracles. In some ways of automating tests, the oracles are multipurpose (like property-based testing, describing and codifying rules that should hold true over generated samples), and sometimes they are very specific.
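The multipurpose kind of oracle can be sketched in code. The following is a hand-rolled, minimal sketch of the property-based idea (real tools such as Hypothesis do this far better, with shrinking and smarter generation); the `normalize` function and the idempotence property are illustrative assumptions, not from the text:

```python
import random
import string

def normalize(name: str) -> str:
    """Illustrative function under test: trim and lowercase a name."""
    return name.strip().lower()

def check_property(prop, samples=200, seed=42):
    """Run a property over generated string samples; return the first counterexample, or None."""
    rng = random.Random(seed)
    for _ in range(samples):
        s = "".join(rng.choice(string.printable) for _ in range(rng.randint(0, 20)))
        if not prop(s):
            return s  # the rule did not hold for this sample
    return None

# The rule (a multipurpose partial oracle): normalizing twice
# gives the same result as normalizing once, for any input.
counterexample = check_property(lambda s: normalize(normalize(s)) == normalize(s))
print("counterexample:", counterexample)  # None when the rule holds over the samples
```

The oracle here is not a specific expected value but a rule that should hold true over every generated sample, which is what makes it multipurpose.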

Both these multipurpose partial oracles and single-purpose specific partial oracles are usually things we build as asserts. In the do-verify layers of creating a test automation script, asserts belong in the verify part. It's how we tell the computer what to verify - blocklisting behaviors that cannot be different. Much of our automation is founded on the idea of it alerting us when a rule we created does not hold true. Some rules are fit to run unattended (which is why we focus on granularity) while others are for attended testing, like exploratory unit testing.

Another approach to the same problem of codifying oracles comes through approval testing. What if we approached the problem with the idea that a tester (doing whatever magic they do in their heads) would recognize right-enough when they see it, and approve it? That is where approvals come in. It is still in the verify layer of creating a test automation script, but the process and lifecycle are essentially different. It alerts us when things change, giving names to rules through naming tests, comparing to a golden master rather than to a pre-assumed rule.

Approvals in automation increase the chance of serendipity, a lucky accident of recognizing unknown unknowns when programming, and they speak to the core of my being as an exploratory tester.

The Difference in the Process and Lifecycle

When we create the tests in the first place, creating an assert and creating an approval are essentially different:
  • An assert is added to codify the pieces we want to verify, and thus we carefully design what will tell us that this worked or didn't. Coming up with that design is part of creating the assert, and running the assert (seeing it fail for simulated errors) is part of creating it.
  • An approval is prepared by creating a way to turn an object, or an aspect of an object, into a file representation, usually text. The text file is named after the test, and thus the textual representation of what we are creating is the focus of our design. We look at the textual representation and say "this looks good, I approve", saving it for future comparison. 
  • An assert you write and run to see green. An approval you write and run to see red, then you approve to see green. 
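The contrast in lifecycle can be sketched in code. Below is a deliberately minimal stand-in for what approval testing libraries do (the `verify` helper, the file naming, and the "HELLO" example are all illustrative assumptions); the assert, by contrast, encodes its expectation up front:

```python
from pathlib import Path

def verify(name: str, received: str) -> None:
    """Minimal approval check: compare the received text against a stored .approved file.

    On first run (or after a change) the received text is written out
    for a human to inspect and approve.
    """
    approved = Path(f"{name}.approved.txt")
    Path(f"{name}.received.txt").write_text(received)
    if not approved.exists() or approved.read_text() != received:
        raise AssertionError(f"{name}: received differs from approved; inspect and approve")

# Assert style: you design the comparison up front and run to see green.
assert "hello".upper() == "HELLO"

# Approval style: you run to see red first, look at the representation,
# then approve it (here: copy received to approved) to see green.
try:
    verify("greeting", "HELLO")
except AssertionError:
    Path("greeting.approved.txt").write_text(Path("greeting.received.txt").read_text())
verify("greeting", "HELLO")  # now passes against the approved file
print("approved")
```

The approved file lives on as the golden master: future runs fail only when the representation changes, at which point a human re-inspects and re-approves.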
When we run the tests and they pass, there is no difference: you see green.

When we run the tests and they fail for a bug we introduced, there is again an essential difference:
  • An assert tells us exactly what comparison failed, in a format we are used to seeing within our IDE. If run in headless mode, the logs tell us what the failed assert was. 
  • An approval tells us that it failed and shows the context of failure, e.g. opening a diff tool automatically when running within our IDE. Especially with unit-level tests, you would want to run the tests in the IDE and fix the cause of failure in the IDE, having it all at your fingertips. 
When we run the tests and they fail for a change we introduced, we have one more essential difference:
  • An assert needs to be rewritten to match the new expectation. 
  • An approval needs to be reapproved to match the new expectation.
When looking for things we did not know to look for, there is again a difference:
  • An assert alerts us to the specific thing we are codifying
  • An approval forces us to view a representation of an object, opening us to chances of seeing things we did not know we were seeking.
Back to exploratory and why this distinction matters so much to me

Even as a programmer, I am first and foremost an exploratory tester. My belief system is built around the idea that I will not know the mistakes I will make but I might recognize them when I see them.

I will create automation that I use to explore, even unit tests. Sometimes these tests are throwaway tests that I never want to push into the codebase. Sometimes these tests belong to a category of me fishing for new problems e.g. around reliability and I want them running regularly, failing sometimes. I will keep my eye on the failures and improve the code they test based on it. Sometimes these tests are intended to run unattended and just help everyone get granular feedback when introducing problems accidentally.

With approvals, I see representations of objects (even if I may have to force objects into files by creating appropriate toStrings). I see more than I specifically command to show. Looking at a REST API response with approvals gives me EVERYTHING from headers and message, and then I can EXCLUDE nondeterministic change. Creating an assert makes me choose first, and moves exploration to the time when I am making my choices.
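Excluding nondeterministic change before comparison is often done by scrubbing the representation. Here is a small sketch of the idea; the field names (`timestamp`, `request_id`) and the sample response are illustrative assumptions, not from any specific API:

```python
import json
import re

def scrub(response_text: str) -> str:
    """Replace nondeterministic parts of a response representation
    so the remainder can be compared against an approved version."""
    scrubbed = re.sub(r'"timestamp": "[^"]*"', '"timestamp": "<scrubbed>"', response_text)
    scrubbed = re.sub(r'"request_id": "[^"]*"', '"request_id": "<scrubbed>"', scrubbed)
    return scrubbed

# Illustrative response text; in real use the headers and body would
# come from the actual API call.
response = json.dumps(
    {"timestamp": "2019-10-30T12:00:00Z", "request_id": "abc-123", "status": "ok"}
)
scrubbed = scrub(response)
print(scrubbed)
```

Everything else in the response stays visible in the approved file, which is where the chance of spotting something you did not know to assert on comes from.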

The difference these create matters to my thinking. It might matter to your thinking too.

Saturday, October 26, 2019

A Man in Tech Doxed Me for a Copyright Dispute

"I know where you live" is not anyone's wish for a message to receive. Worse yet, "You've been sloppy with sharing your private information" was not true. The private information that lead to the contact was not shared by me but by someone I started writing a book with and never finished before we went our separate ways.

This is how I learned that Llewellyn Falco - a leading figure in the mob programming community - decided it was ok to publish my home address online without my knowledge or consent, leading people to my home by the mere choice of where he published it. 

This is what the online abuse community knows as doxing, even if this was a mild version of what could still happen. The person contacting me was a privacy enthusiast with no malicious intent, making a point that I had not been careful, all the while I had been made vulnerable by someone aware that this was not my choice and would never be my choice. 

As a result of this, I took the natural action:
  • Requesting Llewellyn Falco to work to take down the information he had illegally posted. He has not responded. I would think both an apology and action to correct this would be in order. 
  • Requesting Llewellyn Falco's lawyer who had posted the info on his behalf to take down the information as registering private information of a European Union resident would not be GDPR-appropriate. 
  • Requesting the register holder Llewellyn Falco used to publish my home address to take down the information. 
  • Protecting myself by isolating myself from people silently supporting Llewellyn Falco: leaving communities we were both part of and blocking our mutual followers on Twitter. 
Out of the communities we were both part of, I have taken steps back into Women in Agile, who were quick to respond to the choice between the two of us, and a small private community of programmers. I felt lucky to have the Women in Testing and Makeupconf communities that would never have welcomed him in the first place, where I could feel safe confiding for advice and support. 

If you are no longer seeing me on Twitter, you may be one of the 150 people I blocked. I will unblock with a message saying you have made the choice of not following him. I will regularly rerun the blocking script for people who are following him. 

Why Would He Do Such a Thing?

While I feel hurt and offended, I can bring myself to imagine that doxing me was not his intent, but it was his impact. Just like in so many other cases we witness around bad actors, they don't mean to hurt when they do. The reason he does this is a copyright dispute. 

Normally people would resolve copyright issues in a court of law and I have welcomed him to invite me there. He chose to take a path of harassment. 

Back in the day, we started writing a book together under the idea of open source. We published early versions on LeanPub. The dispute, as I understand it, comes from our difference in understanding what open source means in the context of us creating a book together for a while and separating as authors before the book was finished. 

I take the perspective that we co-created text that is openly available for both of us to use going forward. My name is not available for him to use when we part ways. 
He takes the perspective that the last published copy on LeanPub ends the project and blocks both of us from continuing the unfinished work. 

LeanPub, as a platform, says nothing of the copyright. So I agree that we don't agree and might need to resolve it. That is where a court could be helpful, should he feel he needs to change the status quo. 

I deduce his intent from his actions:

  • On April 3rd he disputed contents on LeanPub, using GitHub to claim that text we had pair-written (physically, I had written) on his computer was written by him, and that I would think adding to text I had already written was removing his copyright. In fact I recognized his contribution inside the book but continued using my text to finish an unfinished book on a LeanPub placeholder that was always mine, to which I had invited him as a contributor. He needed to ask to take it down because the project was mine and he was a second author while working with me. The book cover has my name first, the LeanPub page was held by me - it was my project and he was a contributor for a while.

  • On April 4th LeanPub made the final call, blocking the LeanPub book and all books I could write with Mob Programming in the title. Deducing intent, I call out an intent to harm me: removing me from my marketing platform in an attempt to alienate me from the mob programming community for his personal gain. 
  • On April 6th I rebooted my mob programming guidebook completely, writing again every single word of the ideas I had gone through before. This is why my book is available at https://mobprogrammingguidebook.xyz. It is not the same book; it is my fork of the book we never finished, with significant effort used to remove any of the stuff where I disagreed with what I wrote in the first place as a compromise in a two-person co-creation effort. The book is still under construction. 
  • On April 7th I learned that an unauthorized private copy of the book we had previously distributed on LeanPub was on GitHub, posted by Llewellyn. I hold copyright as the first author; my name is on the cover. Through a GitHub issue on the project he published it in, I requested he take it down, as between us copyright owners we had only agreed on LeanPub as the publication place and he had effectively removed that from both of us. He never responded.
  • On April 30th, his lawyer registered a US copyright for both of us for the book, misrepresenting the book he hosts on GitHub by presenting him as the first author. I was not made aware of the copyright registration at all. The book was written in Finland and I am the first author. He knew to protect his personal information while disclosing mine. 
  • On October 23rd a privacy advocate used the information from the copyright registration, where he had published my home address, to contact me about my sloppiness in handling my personal data. 
The Personal

All of the above is very much business as usual, and even business. However, in my call for action, I want to add the personal. 

Yes, we used to date. 

It did not end pretty.

I had to ask Set Enterprises Inc under GDPR, with a threat of a fine, to remove nudes and sexy stories I had given him while in the relationship. He wanted to withhold those. I won, and I'm eternally grateful to EU legislation, with the mediators in the community having failed to influence him a year ago. 

There was no way our professional collaboration could continue. 

My Call for Action

If you read all of this, I have a request for you, and you may choose to dismiss it. I had not asked anyone to renounce him over any of the previous actions, but by doxing my home address, knowing what that means to a social-justice-oriented woman in tech, he crossed every single one of my boundaries. 

I would ask you to choose me to be part of your communities, but that means:
  • If you follow him, I block you. We can't co-exist.
  • If he is welcome as "ally" in your women in ... groups, I will not be welcome as a woman in your women in... group. 
  • If he is welcome in an online slack/group, I will not be. I leave quietly. 
  • If he is welcome in your conference as a speaker, I will not be welcome in the same conference. I will leave as soon as I recognize he is there, and facilitate your convenience by creating a speaker rider that states we cannot co-exist.
I wish you would kick him out, as he encouraged kicking out "simpleprogrammer" a week ago, deplatforming him, but I feel I am not in the place to call for that action. I wish people would choose to include me by excluding him, but I recognize that since his bad behavior is more private in nature, he can keep his fake ally status and alienate me from the mob programming community I have contributed to in my own right. 

Communities are nothing more than connections of people, and it is my own choice to step to the side for safety. 

This is how women in tech leave. Thanking the people who are there for them, fading into invisibility.

Edit 30.10.2019. Let me make this clear: I have already done my part of running blocking scripts and scheduling reruns. The only thing left to do is change how things are by asking for an unblock, conditioned on not following him. This is my equivalent of stepping out of the space he is in.

Friday, October 25, 2019

Our Three Ways to Beta

Working in a product company, the term "beta testing" we pass around in testing certifications actually has meaning and relevance to us. But while you can find a vocabulary definition for it, real life with what it means is much more interesting and versatile. So let's look at three products and their three ways to do beta. Each product version, when fully taken into use, means installation to computers counted in millions, so set your eyes away from that one server and think about building something where distributing and installing is a major effort split between the user ("acceptance testing") and the product ("all testing") organizations.

Product A

Product A is targeted at big organizations, and whether it is true or not, the day-to-day is filled with change avoidance, change control and unwillingness to take new versions into use. The product A version the customers have today is good enough, as in not causing trouble, and there is no pull to move forward. Every move forward is work that includes acceptance testing of a new version of product A, in whatever specifics each customer has, ensuring standards around disallowing software untested (by the customer organization separately) from sneaking into production.

Because releases are work, and hard, and to be avoided, product A works with a very traditional release cycle. You can expect to see an annual planning cycle set around the idea that there are two major and two minor releases. The customers recognize that a version x.00 is always the major one, and as it introduces major new features and changes, you should avoid it, employing a conservative strategy. The minor releases, x.01, customers recognize as having what x.00 had, but with fixes prompted by the first brave customers, with history showing that the conservative strategy is worth the wait.

With only a few releases a year, each release introduces thousands of changes, even if the list of visible new things is only a few items per release. The risk accrued by not releasing is a realistic threat that the customers of the product experience, because as hard and as well as the product A team tests, the real operational environment is always more versatile and the use cases more surprising than a test lab can simulate.

When product A does beta, it is a schedule milestone and a warning of uncertain / low quality for customers, a release on top of the four releases. When you add to the two major versions a beta each, you have a total of six releases a year! And if anyone takes a look at the beta version, finding out it does not work in customer environments (again, the product team tests hard in the test lab before!), the features available for that release could already be somewhat improved by the time of the major release going "RTM" (release to manufacturing). The time between beta and RTM is not for bug fixing though; it is for the second batch of features that never see a beta. Sometimes, when the stars align, testing and fixing work happens during this beta instead of running with the next batch of features.

The beta exists to enable a user organization to start their acceptance testing early but no one would expect it to end up in wide production use. That's what the major and minor versions are for.

Product B

Product B is targeted at big organizations too, but these big organizations serve a more individualistic user base. Major and minor releases work very similarly, with the annual planning cycle seeing two of each, but the effort allocated to them is different. They are not projects where one follows the other; usually a new major release starts as the previous goes out, and a minor release happens on the side. A significant difference comes with the size of minor releases, which product B minimizes. Most fixes are seen within major releases, going out only in the next major release, and doing a minor release is an effort to minimize the scope, whereas product A sees it more as maximizing the scope to get anyone to move to a new release.

The customer sentiment around what a major and minor release means is very much the same as with product A, but there is a slow yet significant pull to get the major releases out for real users to use. There is some avoidance of change as it is still a project, but it is considered a little less arduous. And then there are some customers who behave exactly like product A customers, but they are in the minority. There are rules in place on how many versions are supported, which supports the slow pull.

When product B does beta, it's a pulse release assessing quality continuously, every two weeks. Whatever can be used internally can be used externally by a selected group of thousands of users. Beta is more a way of paying for the product by telling if it fails, and it very rarely fails in ways users see. Meanwhile, it is possible to see through telemetry ways of failing that users don't see.

When a release is made available, every single feature it includes has been out in beta. People have been asked specifically about those changes. Some things have been in beta for six months, others for two weeks. The RTM schedule depends on key features (major risks) having had enough time in beta to assess feedback, and RTM quality is solid enough that major releases are regularly considered for use. 

Product C

Product C is targeted at medium-sized organizations but lives in a service space where continuous changing of software is already a more acceptable practice for customers. Since it is a service, moving from one version to another is in theory invisible to the user organizations, and they have been stripped of the possibility to control it. New stuff comes with a cadence that does not show up even as a need to reboot. There are no major and minor releases, just releases every 2-4 weeks.

When product C does beta, it's a last chance to cancel wider distribution. It's not really called beta, but "early access" (external customers) and "pilot" (internal customers). The time for beta is one week before full availability, and as with product B, things run more on telemetry and seeing failure than on hearing back from a real user seeing problems.

Do the differences matter? 

The differences cause some confusion, as one product's beta is not another product's beta, and the rules around scheduling are essentially different. As a tester, I would never want to go back to the product A style of working, and from my experience with continuous delivery as a transformative practice, both B and C have a lot of similarities.

It's now been 12 years since I moved product B to the pulse-style continuous beta. I've played a central role in defining product C taking the practices further, ensuring the best software we have created is running as a service on the customers' machines.

I work from the idea that in the security business of defence, we need to move fast because the offense definitely does. The end users may not understand all the details of what we defend them against, but working with those details day in and day out, I know in my heart the speed matters. Even more in the future than today.

Thursday, October 24, 2019

Stop Assigning Me Jira Tasks

People are social. I'm particularly social and need real human connection at work. Assigning me a Jira ticket describing a task that you think needs doing fails my need for connection on so many levels that I needed to stop working and start writing.

The Communication

I appreciate every colleague that drops in (even virtually) and talks to me like people talk to one another. They tell about a problem they have, an aspiration they have, a wish they have in hopes of influencing me to do something for them or rather with them to make the world a better place. I could not be happier.

Surely there are too many things. There is uncertainty. But as we connect, we establish what other things I may have on my mind. We may agree that while they were hoping they could just dump the work on me, I may be busy elsewhere and this stuff is too important to wait for me, and the person trying to do the dumping could even do it themselves. Sometimes we end up doing it together, enabling the idea that I don't need to be the only one doing things like this. All through the connection.

This human to human communication seeking mutual benefits instead of assigning tasks is a source of happiness. The opposite is a source of unhappiness. And Jira tickets where someone is hoping to dump the work on me will never do the same thing as that person talking to me, caring about my response.

"What's the status with ABC-123?" is not how people talk to people they like, value and appreciate.

The Decomposition

When we have a goal to achieve, different people can achieve the goal differently, and it does not mean that one way is ultimately the best one. Usually the best ways to achieve goals are ones that teach us the most and help us stay honest about where we really are with the end result. At least personally, I'm not happy that we built the perfect "smart inventory" as we imagined it if it does not serve the purposes we had in mind for creating it, and I don't see a change in behaviors with the use of products containing such a thing. I recognize, however, that other people see success already in accomplishing a task, without assessing whether the outcome is that we are in a better place for it.

When I am given a task in Jira that someone else wrote, I find I treat it as a decomposition of a goal. I first spend time recomposing the tasks into the goal, and then re-decompose to ensure that the original decomposition is of high enough quality that I can find my sense of purpose in doing the work assigned to me. This is a lot of energy used.

But it is not just the energy drain that is a problem. It is also the fact that, more often than not, I find there are tasks in the negative space, and we would end up delivering with quality below standards I can be comfortable with. I could do what you asked, but I wouldn't do what you wished and intended. So instead of telling me the task, tell me what you wanted in the first place, before you decomposed it for me, as your decomposition isn't helping me.

If you decompose for me because you think I cannot, how will I learn when you always chew it for me? You may think I should be grateful for the work you do for me to help me but really: nobody's job should be to think for others but to grow others to think. Be there to support, give me feedback but let me do the work that makes me grow.

The Sense of Ownership

As a tester, I have grown to think that I look at things with the end in mind, while people working with requirements often look at things with the beginning in mind. We both look all the way, but our emphasis is in a different place, and because of it we see things differently.

A value item is not done for me until I have tested it in multiple ways, as per the risk unique to each item. There is no recipe I follow for every single one, but there are patterns and heuristics that help me make those decisions. I look at the features we do in production, not only up to production, and I learn what I could improve in my work months, even years after first doing the work, harvesting patterns that would prove I am wrong now - like a true tester, seeking to falsify hypotheses as the way to get closer to being right about things.

I have grown a sense of ownership. I don't seek to avoid blame ("I did what ticket ABC-123 said and it did not say to do X") but to learn from the mistakes we made. FAIL is a First Attempt In Learning, and we are in it together. Someone else's plan that was wrong was my mistake in not decomposing the plan to find out how it was wrong, and I may learn to make my choices differently, or accept that the choices I made under the conditions I was in are choices I would make again, regardless of the result being less than perfect. Who seeks perfection anyway? 

For me, a ticket in Jira closed without doing anything after six months is an indication that I created waste, not value, in writing that ticket. I don't believe there is inherent value in remembering all the ideas we have had that we did not act on. And looking at the tickets closing, the evidence still suggests that I should lean more towards rejecting the hypothesis.

People, as per Pink's book Drive, are creatures looking for Autonomy, Mastery and Purpose. Sense of ownership is how I frame those in my work.

Friday, October 4, 2019

Job Crafting and Why Specialty Titles are Still Relevant

I work with a cozy 11-person DevOps team. I say DevOps because we are running the development and operations of a reasonably sized (in the millions) user base for a particular Windows application product. We do ok, and I really like working on this, with these specific people.

These specific people are all individuals, 6 developers and 5 testers. That is at least the high-level categorization. Yesterday was a particularly interesting day, and I tweeted about it. 

Watching the ratio and thinking it tells about the work we do in the categories of "dev" and "test" makes little sense. But watching the ratio as how many people hold the space for quality ("what do we really mean by done") and how many people hold the space for smart solutions ("how do we change things for the better") makes a lot more sense.

The testers implement features. The developers implement tests and explore. And this all is very natural. Everyone pitches in within what they can do now, stretching their abilities a little.

I think of the two main roles we hire for as centers from which we operate. When named a tester, you naturally navigate towards caring for quality, feedback and systems to give that feedback. Your home perspective is one of skepticism, wanting to believe that things could be broken. When named a developer, you naturally navigate towards adding functionality and internal quality to enable future work, and delivering changes to users to change their experience with the product. Your home perspective is one of believing in possibilities, seeing how things could work. When there is space for both perspectives in collaboration, better software tends to emerge.

I have been the solo tester with 15 developers around me, and holding space for quality feels different there than it does here. Here I am not alone. And a lot of the time I find the best quality advocates are those I would classify as developers.

I still call them testers and developers, because they still are testers and developers. But they are not binary. The testers are a little bit more testers than developers. The developers are a little bit more developers than testers. 

Seeking both helps in hiring. It helps in creating a sense of purpose these people fulfill within the team while allowing the stretch. At the end of the day we need both perspectives, and having people who feel ‘home’ in different roles helps keep the perspectives strong.

There is no good single word to merge both roles into in a way that doesn’t send the message that we are giving up on testers. There are people who want to be testers. Being a tester is great. Yet when seeking this one word, our gravitation is towards making us all developers.

I'm a big believer in job crafting - the idea that no matter what job you hold, you do little (or bigger) things to make it the job you want. The job that looks and feels like you. If you were hired for a purpose, crafting your way into a place where you forget what you came to do isn't what we seek. Understanding your purpose and the value you are hired to deliver is important. But letting that stop you from growing, doing things smarter or differently, would not be right.

So if a tester develops features, they can still be a tester. If they don't test anymore, they should probably not be called testers. Promising a service that isn't there is just dishonest.

Friday, September 6, 2019

More Practice for the Feedback Muscle

Where I work, we moved this year to quarterly personnel reviews.

With two rounds behind me as engineering manager of a team of ten (max 15 on some rounds), I sometimes feel like I barely finish one before another one is already starting.

The way one round works is very similar to what we used to do "process supported" once a year. Automation generates a set of questions about your achievements, your learning and your goals for the future. They are sent to the employee, who fills them in. The manager can generate more forms all around the organization, inviting anonymous feedback to collect info. And then the employee and the manager discuss that stuff together, planning forward for the next interval.

With 10+ people and 4 times a year, that is a lot of forms. And with all the colleagues I work with, that is even more forms on feedback their managers are inviting us to provide.

At first, I was thinking of the forms as a way of documenting achievements for posterity. After all, I facilitate a very productive team that not only does stuff, but actually provides value for end users. Everyone contributes in their own way, in ways I see as unique and supportive of others. I'm a manager trying to escape management (clock is ticking, max 9 months to go...), so it only feels fair that the record I leave for the future would help the future manager understand my reports' successes.

I only needed one annual and one quarterly review to realize that the process needs to be played with. And when I say play, I mean more than "talk every day and make notes quarterly". It needs some serious play.

With my team, I announced we are doing it this time in pairs. This would work so that everyone again fills in their own form and it gets sent to me. Then I assign everyone a pair, who is a peer in the team. The pair will have the responsibility to fill in the manager's bits, providing their colleague feedback. I will act as secretary and quality control person, helping fill in gaps in relevant feedback.

Our quarterly review was feedback and feedback on feedback.

I learned that:

  • Everyone being the other's manager, even if just as role-play, was great
  • Everyone has relevant feedback and ideas to grow for their peers
  • In a pair + manager, both positive and negative feedback was discussed constructively
  • People generated ideas of what to try to do differently
  • I could add my pieces and views to the discussion that was much richer this way

Whenever some manager asks me for feedback on their reports anonymously through the automation system, I always send an email to the person, giving my feedback without anonymity. I believe anonymity only brings out the worst in people. It weakens the gift of feedback. It removes the possibility of a dialog and co-generation of ideas to improve things. It allows resentment to build, and creates an atmosphere where you need to be guessing which one of your colleagues is unhappy with you in case of negative feedback.

When I do this, I hear that it is culturally not possible elsewhere to do what I do - in the same organization. 

I hear people don't have the soft skills of giving feedback.

I hear people only talk about positive and don't speak of the negative. 

I hear people have no baseline of what really good looks like. 

The way I look at it, these are true for lack of practice. You need to build the feedback muscle. And just like with real muscles, those grow with repetition, practice and corrective feedback.

Giving feedback - radical candor - is relevant. If you hide your problems because they are hard to talk about, how do you expect to get good? If you don't share what delights you, how do you expect to get more of it? 

Monday, September 2, 2019

Breaking the Assumption of Review to Accept

It was one of the European Testing Conference calls, and I forgot to ask at the end of the call if they'd trust me to summarize the call in a tweet. I remembered I had forgotten only an hour later, and they were no longer around. I sent a message asking for the trust, and the response taught me something of relevance I had had a hard time communicating before. The response asked me to run the message through them.

I felt deflated. I no longer wanted to write that tweet. I felt they did not trust me. I felt they wanted control over my tweets.

I sat on the response for some hours, thinking I would let it pass without tweeting, without saying anything. But eventually I responded and expressed how I felt.

"I do really bad with reviews (for acceptance), they suffocate my ability to do things. I do "you can ask me to delete" style of reviews." - I told them. 

"It's a tweet. Go ahead" and "I'm sure you'd look out for me" was just the response I needed.

They did not ask me to delete my tweet. I would have if they had asked me to. The risk of me doing something irreversible was very low. There was no particular reason why the review needed to happen before the material was published, as it could happen after. I would carry the risk of apologizing in public, explaining in public and reaching out to people on the off chance that this time I failed at doing something I did routinely.

The tweet was something that made a pattern of how I prefer working very visible.

I build skills and competencies in me & people around me to do things without acceptance.
I expect things are discussed in preparation, not reviewed as final step.
I trust the doer to pull help when help is needed.

I was writing release notes for millions of users, all by myself. As we added another product, the product people wanted a review before publishing. I asked them to publish their own release notes. What we were already publishing routinely had all the info they needed. They just needed to add the review and republish. Waiting for that step did not make sense to me.

I welcome feedback on mistakes, to grow the skill. I kick out the testing-fashioned acceptance review, unless I see it is founded on actual risk.

I don't let people do this to me, and I don't do this to people. I've given up on being the guardian of quality and become the facilitator of quality. The work happens before and while, not after. And there's always the next cycle to act on feedback on mistakes.

Saturday, August 31, 2019

Women are cut out for the highest tech salaries

I spent yesterday with 500 people, many of whom were programmers. Granted, many were beginning programmers, but they were programmers nonetheless.

You become a programmer when you start programming. You become a professional programmer when someone is ready to pay for your programming. It's that simple. Even "full time professional programmers" do other things than write code for most of their days.

Every day is a chance of learning more.
And yet, here I am *again* using time away from studying and learning more, like pretty much all my life. And the reason for it is my gender.

The event yesterday was #MimmitKoodaa, a Finnish initiative to bring women from other industries to programming. The 500 people were women. They were there because Finnish companies have started taking action, providing targeted free hands-on trainings specifically to teach programming to this demographic. Smarts are not divided based on gender, and with software being the thing that defines our future, we won't be leaving our future to men only but want the best minds of all genders (including the ones not in the binary) to work on this stuff. Also, tech pays. And it pays well. Women are cut out for being paid well, and for working to learn to be worth all that money.

I'm writing this because I made the mistake of browsing through the #MimmitKoodaa hashtag on social media. I read comments from someone I know, telling how women are just not cut out for programming, and that the proof of it is that women in the industry are more often testers than programmers.

Having to use energy to walk away or address that shit is the reason why women still avoid programming.

When all your pull requests are specially analyzed for lack of aptitude, rather than assuming you're learning.

When all your programming assignments in school are met with "who did you smile to so that they wrote the code for you" by your peers (teachers knew better).

When you can't have a 1:1 meeting with your colleague without others making fun of you having something going on because one of you is a woman and the other isn't.

When you speak in a meeting about architecture choices and the facilitator takes you aside afterwards to tell you that "you're intimidating, you need to let the others do the talking" even though you really did not speak any differently than the others.

When organizing meetings and other glue work is implicitly assigned to you, because everyone knows you care enough to do it and they can get away with it by waiting.

When suggesting mob programming, your colleague tells you that you could motivate them by showing your breasts.

These are just a few examples of what I go through. My list is a lot longer, but keeping lists drains energy from doing other stuff. Many women have lists like this. Many women choose not to share their lists to save their energy, leaving foolish people thinking there was no problem in the first place. But it allows them to focus. That is why they are further in programming. And we have lots of examples of superb women programmers.

Fuck off with telling us women are not cut out to take big salaries. We are. We have always been. But we shouldn't have to take the extra shit for getting what you're earning now. For this amount of shit, you should pay us more. And women need the extra help in getting started because we've used our time on fighting shit that you guys got to use on playing to learn. That is why #mimmitkoodaa is a great thing.

Asking for a private discussion

Two years ago, I had trouble with a colleague. I was a tester, they were a developer, and I was unhappy with how seemingly carelessly they would push in changes, and leave no room for others to catch up and test any of that stuff.

Like often happens, I did not tell them. I tried making them change their ways, arguing over time, but I never went to the person and told them to their face how their actions made me feel.

Instead, I told my manager.

We work in this office with team rooms, so whatever I would say to anyone would always be a thing I would say to 10 people. Unless I asked them to step out into a private room.

My manager listened, and told me to talk to the person. Two weeks later, he asked if I had taken care of my problem. I said the problem was gone. But I never talked to the person. It felt too difficult.

In an open space, even the "can we talk - in private" is something everyone hears. I see people applying tons of ways around saying those words out loud - sending a message, putting a small meeting in calendar. And yet, when two people get up the same time, people notice.

It is hard when it is not in the culture.

A year ago, I became a manager. I had no other choice but to talk to people in private. And I found my way of doing it. I don't do scheduled 1:1's because that is people's choice. But I make sure we talk stuff in public that needs talking, and I always ask for that private discussion. It's in the role, it happens.

Yet, I still remember the first time it happened in the new role. I asked another manager, a man since I am the only woman, to speak with me on a difficult situation I was facing as a new manager. The jokes about two genders in one small room were overwhelming, and as I walked away, I was torn between punching someone and never asking again.

I believe offices are better if private discussions are normal and natural. They often are that for the more senior members of staff, who easily go pick up someone relevant and talk over a cup of coffee. To get there, here are my thoughts on creating a place that encourages them:

  • Tell people that this is expected, especially the new and junior people
  • Give them tips on how to ask for a private discussion (calendar seems to be the normalized way around where I work)
  • Do something with the overall atmosphere where noticing who talks is relevant by talking generally more openly
  • Hold the jokes about men and women working 1:1 - they are harmful beyond your immediate understanding

Thursday, August 29, 2019

Bag of Candy - A Conference Talk Design

Having listened to conference talks at scale, and discussed potential conference talks at even bigger scale, I've come to explain to people that the most impactful conference talks are the ones that make you do something.

They could be giving you an idea with motivation so powerful that you remember their version of it when the time of implementing that is right. At least you'd go back to office mildly pushing for a change, until you again accept that the organization isn't moving anywhere on your ask.

They could be giving you guidance on how to do something well, with quality. And when you return to the office, you will do things differently just because you now know how to do it better.

For creating a program for a conference, the hard part is that people don't often suggest either of these kinds. They suggest all kinds of other talks:

  • "I read a book and now I want to talk about it" 
  • "We build test automation for 3 years and I want to talk about it"
  • "I lived a great life with many turns and I want to talk about it"

The problem with the first is that it is not directly rooted in experiences of doing the thing. Thinking the thing isn't doing the thing.

The problem with the other two is that they are "life stories", experiences that are framed through fast-forwarding a life rather than starting with conclusions from that life. I think of these talks as "bag of candy" talks.

A Bag of Candy -talk is one where the main character of the story is the main message. It's like a bag of candy with all kinds of goodies: some hard ones, some soft ones, some black licorice ones (the best kind!), some fruity ones and even some sour ones. We all love different kinds of things, so there's a little bit of something for everyone. Everything in the talk is an invitation to start a discussion, but it leaves very little learning to the listener. You did not learn how to properly enjoy the black licorice, you just heard some way it is awesome. 

What if we framed our talks around that one kind of candy, and illustrated its greatness with our experiences and stories? What if the audience left with a compelling idea of releasing daily, scaling tests like the speaker does, or being mindful in the day-to-day job to do better instead of more? Just mentioning an idea does not make it stick. To make ideas stick, we need to walk our listeners to those ideas with us, illustrating from multiple directions.

I choose one-candy talks over bag-of-candy talks. Which ones do you prefer?

Tuesday, August 20, 2019

Pull, don't push

What if you could start with the end in mind? You could be aware of all the good things you have now, imagine something better and focus on finding a step to that direction. This way of thinking, a step by step, pulling value out is what drives the way I think around software development.

Starting with the end in mind and pulling work to get all the way to the users, it is evident that nothing changes for the users unless we deliver a change. All the plans are pushing ideas forward, and pushing does not have the same power as pulling. A concrete example of how something could be different is a powerful driver for making it different.

I'm thinking of pull scheduling today, and I reviewed yet-another-product-realization-process draft that seems to miss the idea of the power and importance of pull.

Pull helps us focus on doing just the work now that we need in delivering that piece of value.
Pull makes us focus on learning on what is worthwhile so that we don't get pulled on random things.
Pull enables collaboration so that we together make work flow.
Pull centers the smart thinking individuals who pull what they need to create the value upstream is defining.

When we know we need an improved user interface, pull helps us realize that we should get the pieces together for delivery, not for a plan.

Plans push things through. Planning is still there when work is driven by pull; the plan is the continuously improving output.

Who is pulling your work? 

Friday, August 9, 2019

From Individual Contributors to Collaborative Learners

Look at any career ladder model out there, and you see some form of two tracks that run deep in our industry: the individual contributors and the managers.

Managers are the people who amplify or enable other people. Individual contributors are the ones who do the work of creating.

The idea of needing a manager runs deep in our rhetoric. Someone needs to be responsible - like we all weren't. Someone needs to lead - like we all didn't. Someone needs to decide - like we all were not cut out for it. And my biggest pet peeve of all: someone needs to ensure career growth - like our own careers were not things we own and work on. Like we needed a specially assigned role for that, instead of realizing that we learn well peer to peer as long as kindness and empathy are in place.

For years, I was a tester, not a manager. And this was important to me. And in my role as a feedback fairy, I came to realize that as an individual contributor, there was always a balance of two forms of value I would generate.

With some of my actions, I was productive. I was performing tasks that contributed directly to getting the work done. With some of my actions, I was generative. I was doing things that ended up making other people more productive.

One of my favorite ways of contributing became holding space for testing to happen. Just a look at me, and some of my developer colleagues transformed into great testers. I loved testing (still do) and radiated the idea that spending time on testing was a worthwhile way of using one's time.

As an individual contributor, I learned that:

  • My career was too valuable to be left on the whims of a random manager
  • Managing up was necessary as an individual contributor so that random managers would be of help, not of hindrance
  • Seeking inspiration from peers and sharing that inspiration helped us all grow further
  • The manager was often the person least in position to enable me to learn

From most perspectives, it became irrelevant who was an individual contributor and who was a manager. The worst organizations were the ones that made an effort to keep those two separate, denying me work I needed to make the impact I was after as a tester because that work belonged to a manager.

The impactful senior individual contributors were more like connected contributors - working with other folks to create systems that were too big for one person alone.

As I grow in career age, I realize that the nature of software creation is not a series of execution tasks but a sequence of learning. Learning isn't passed on in handoffs, with a specialist doing their bit and telling others to take it from there. Learning is something each and every one of us chips away at, a layer at a time, and it takes time for things to sink in to an actionable level. Instead of individual contributors, we're collaborative learners.