Tuesday, August 20, 2019

Pull, don't push

What if you could start with the end in mind? You could be aware of all the good things you have now, imagine something better and focus on finding a step in that direction. This way of thinking, step by step, pulling value out, is what drives how I think about software development.

Starting with the end in mind and pulling work all the way to the users makes it evident that nothing changes for the users unless we deliver a change. Plans push ideas forward, but pushing does not have the same power as pulling. A concrete example of how something could be different is a powerful driver for making it different.

I'm thinking about pull scheduling today, having reviewed yet another product-realization-process draft that seems to miss the power and importance of pull.

  • Pull helps us focus on doing just the work we need now to deliver that piece of value.
  • Pull makes us focus on learning what is worthwhile so that we don't get pulled into random things.
  • Pull enables collaboration so that together we make work flow.
  • Pull centers the smart, thinking individuals who pull what they need to create the value upstream is defining.

When we know we need an improved user interface, pull helps us realize that we should get the pieces together for delivery, not for a plan.

Plans push things through. Planning is still there when work is driven by pull; the plan is the continuously improving output.

Who is pulling your work? 

Friday, August 9, 2019

From Individual Contributors to Collaborative Learners

Look at any career ladder model out there, and you see some form of two tracks that run deep in our industry: the individual contributors and the managers.

Managers are the people who amplify or enable other people. Individual contributors are the ones who do the work of creating.

The idea of needing a manager runs deep in our rhetoric. Someone needs to be responsible - like we all weren't. Someone needs to lead - like we all didn't. Someone needs to decide - like we all were not cut out for it. And my biggest pet peeve of all: someone needs to ensure career growth - like our own careers were not things we own and work on. Like we needed a specially assigned role for that, instead of realizing that we learn well peer to peer as long as kindness and empathy are in place.

For years, I was a tester, not a manager. And this was important to me. In my role as a feedback fairy, I came to realize that as an individual contributor, there was always a balance of two forms of value I would generate.
With some of my actions, I was productive: I was performing tasks that contributed directly to getting the work done. With some of my actions, I was generative: I was doing things that ended up making other people more productive.

One of my favorite ways of contributing became holding space for testing to happen. Just a look at me, and some of my developer colleagues transformed into great testers. I loved testing (still do) and radiated the idea that spending time on testing was a worthwhile way of using one's time.

As an individual contributor, I learned that:

  • My career was too valuable to be left on the whims of a random manager
  • Managing up was necessary as an individual contributor so that random managers would be of help, not of hindrance
  • Seeking inspiration from peers and sharing that inspiration helped us all grow further
  • The manager was often the person least in position to enable me to learn

In most respects, it became irrelevant who was an individual contributor and who was a manager. The worst organizations were the ones that made an effort to keep the two separate, denying me work I needed to make the impact I was after as a tester because that work belonged to a manager.


The impactful senior individual contributors were more like connected contributors, working with other folks to create systems that were too big for one person alone.

As I grow in career age, I realize that the nature of software creation is not a series of tasks of execution but a sequence of learning. Learning isn't passed in handoffs, with a specialist doing their bit and telling others to take it from there. Learning is something each and every one of us chips away at, a layer at a time, and it takes time for things to sink in to an actionable level. Instead of individual contributors, we're collaborative learners.


Tuesday, August 6, 2019

When the No Product Owner Breaks

Two years ago we ran a 3-month experiment of "No Product Owner". We listed all the work we expected a product owner to be doing, recognizing a particular idea around prioritizing work and knowing customers on a level where their priorities would somehow better reflect the needs of the business. And we agreed that instead of that old single-wringable-neck of responsibility, we would have no one in that role, and we would do all the work we recognized on that list as a team.

The 3-month experiment turned into how things roll around here. And these have been the happiest, most productive two years of my professional career. Even with the fact that I've now also become the reluctant manager trying to understand why managers were ever needed and how that could become untrue again.

The first impacts of No Product Owner were evident and easy to see:
  • Developer motivation through ownership. Feeling you don't need permission to do what you know is right did wonders for what we got done.
  • Emergence of monitoring. We refused to rely on feedback from those upset or active, and wanted to see the feedback in scale, even when the users were not pushing through all the obstacles to get the message to us.
  • Owning up to all things maintenance. We opened our ears to hear what users said about us. We weren't picky about the channel, but focused on hearing and understanding, and dealing with it as soon as we could.
  • Frequent software delivery. Making a release was our choice. We paid attention to the costs of it. We released more frequently, collecting less stuff on the shelf.
  • Unfiltered developer to end-user and 3rd-party communication. Solving hard-to-solve problems in debugging calls, getting 3rd-party devs to understand the right details, and avoiding the damage filtering can do to understanding end-user needs.
  • Implementing beyond own "box". We'd use the internal open source model to implement things truly end to end so that "done" became something that was in the hands of our real end users.
  • End-user documentation turned part of product. We started writing end user documentation, targeting our release notes as useful communication and working with marketing to highlight themes that had been bubbling over a longer time. 
  • Discovering features never requested (measured useful). We would listen to problems not just as bugs to fix but as opportunities for designing better features.
Many people suggested we just used to have a bad product owner. This is not the case. Many people suggested that I just became the product owner. This is not the case either. What I did, however, was help people see when they might be dropping a ball. And I took some balls and dropped them, just so that other people could pick them up. What I keep doing is analyzing how we do and what I am learning as we are doing it.

At two years, I'm starting to look at the places where the No Product Owner approach is breaking for us. I currently model this as three levels of signals, where some we are able to work with and others need something we don't naturally have.
  • Loud signals. Hearing what people say is not that difficult. It even works that they say it to any of us, and any of us delivers the message onward in the team. If someone has a clear idea of a feature and they ask for it, it comes through loud and clear. If someone has a problem with our product and they figure out that it indeed is a problem and contact us, it comes through loud and clear.
  • Medium signals. These are signals that already require some amplifying to turn into action. Like a badly written bug report from end users, or the need to create a monitoring view that shows a trend. We've improved a lot in dealing with these over the two years, and it looks to be an area that will keep growing.
  • Low signals. These require so much work that we may need to bring back a product owner role just to work on them. It is easy to miss relevant things we should be acting on, because making the information actionable requires significant amounts of work. Be it turning "this feature isn't really very good" into ideas of how it isn't good and what we could do to make it better, or seeing and innovating technologies that would change how we see the playing field.
For the low signals requiring work, this work was missing for us already while we were in the with-PO mode. There was so much other work for the product owner that they were behind on this analysis almost as much (or a little differently, but just as much) as the rest of us. We need an analyst role that works on low signals (and coaches the rest of us to work on them), amplifying the important ones into actionable form. Preferably one who understands incremental delivery.

It just might be that that is what we'll ask the product owner to be, or that is what I find my role growing into. Time will tell.

Monday, August 5, 2019

When Four Should Be One

There's a piece of wisdom that runs by the name "Conway's Law". Conway's Law states that organizations (or rather, the communication structures in those organizations) constrain software architectures, so that your organizational structure turns into your software architecture.

Just when I was thinking we were somehow less impacted, given our relative success with an internal open source model, I realized that while the architecture may not follow the manager-assigned organization, it is still very much shaped by the communication flows and power structures that exist.

Our system test automation structures, however, don't fight Conway's Law at all. Someone drew four separate organizational units, and at worst, I can find four slightly different duplicates of similar tests. This post is my pamphlet for going to war against these forces, when I probably should just be making the organization pull an inverse Conway maneuver and change how the teams are structured.

I work in a situation where I have four, but I should have one. I believe this would change by first changing the way we visualize the four: as one. Getting the three together with the one most separated would follow immediately. And then comes the hard work of introducing a new model of understanding what an application test is and what a product test is, in a way that is not product-team specific.



Friday, August 2, 2019

The Non-Verbal Feedback

I'm a tester (whenever it suits me) and as testers, we specialize in feedback. Good feedback is true and timely, and while we think of feedback as the things people turn into words, a lot of times it ends up being non-verbal.

We look at an application for the purposes of testing, waiting for it to "whisper" where it has problems. The application is my external imagination and all I need is to give it a chance to tell me the very quiet messages that I can then amplify. I need to be ready to listen to multiple narratives:
  • does it do what we intend and reasonably expect it to do?
  • does it have adverse side effects to anything we care about? 
  • does it cause someone problems, either now or when trying to keep it running in production?
We amplify feedback, keep it true, and increase its power by making it actionable. If we are timely, a moment later no one remembers the problem.

Just as people give feedback on the quality of applications, sometimes we need to venture to an even harder side: giving feedback on the quality of the people producing our applications. It is easier to look at an artifact and describe the features we appreciate and don't than to do the same for our colleagues.

When it comes time to say something about our colleagues, I find that we discover a human feature of conflict avoidance. Leaving things unsaid is a choice many of us make. Making choices of when to say something and when not to is something every one of us needs to learn.

As peers, we are asked to give feedback, but in case of a potential conflict, as peers we can more easily step away and continue as if it isn't *that big a problem*. What I'm learning is that this is one of the many reasons organizations task managers with dealing with small problems before they grow big.

Telling someone they should shape up their game isn't easy. It makes me feel awful, especially when it turns out to be a piece of feedback that is hard for the receiving party. But all I do is turn something they're not paying attention to, something bubbling under, into words, with the hope that through seeing it, they could do something.

When your pull requests get very little feedback, you could think of it as absence of errors, or what it truly is, absence of feedback. Why aren't you getting feedback?

When your colleagues don't complain about your work, that could be absence of errors, or it could be absence of feedback.

What can you do to get to the absent feedback? And don't tell me "get a manager who turns it visible for you, as a service" - start serving yourself. Feedback is the lifeline of improvement, and its absence is a road to stagnation.

The idea of turning implicit into explicit, invisible into visible and abstract into concrete could well be my tester guideline of what to do, but I grow uncomfortable providing this as a service while a manager. It's something everyone needs to learn to do for themselves, and for their peers.



Wednesday, July 31, 2019

Power of Hindsight and Critical Mindset

Disclaimer: this story is roughly based on true events, but it didn't happen in this format. All pieces on their own are real, but as a flow they are purely a product of my imagination.

I was chipping away work, just like we do. A lot of the work was pull requests, code changes. Because, without changing something in what we deliver, nothing changes for the people who matter: our users.

Adding the thing was giving me pain. Not the physical kind of pain, but I could recognize the heavy feeling at the back of my head: this wasn't just going to go easy. And since I wanted my pull request small and done, and the code was working, just not pretty the way I like, I made a pull request, labeling it "a temporary solution".

I went home, relaxed, didn't think about it. Back at the office, I got help from lovely colleagues, hit my head against a few more walls, but at least this time I was neither pressed on schedule nor alone, and eventually I was again making a pull request.

And again. And again. And again.

The flow of making small changes made me lose sight of all the things I had done. I had received feedback, but I didn't remember any of it. Did any of it change me for the future work I did? I really didn't know.

Like a person with a question, I wanted to answer my question. Jumping into the tooling, I started off a Jenkins job to summarize all the changes I had made in 2019.

Instinctively, I first counted the lines and jumped to judgement. If the number was big, I primed myself to see that perhaps I had just done everything twice and did not remember. If the number was small, I wondered where my life went when this was the track I left behind.

Similarly, I eyed the title lines. That already addressed my concern about doing everything twice, because while each pull request had moved on its own through the systems into the masses of forgotten details, the titles represented what I had wanted to say about the thing as a one-liner, back when I still had it fresh in memory.

With numbers and titles, ideas of patterns started to evolve. But I wasn't done. There was a third level of remembering I needed: the comments I was getting, as a timeline. Was I always reminded of the same thing, like leaving the salami pizza box on the living room floor, only to be kindly reminded it wasn't its place, over and over again? If I created a taxonomy of my feedback in hindsight, what would that teach me?
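
None of this needs Jenkins in particular. As a minimal sketch, a hindsight summary of the kind I was after could look like the following, assuming the pull requests have been exported as (title, lines changed) pairs - the data and the `summarize` helper here are made up for illustration:

```python
from collections import Counter

def summarize(prs):
    """Summarize pull requests exported as (title, lines_changed) pairs."""
    total_lines = sum(lines for _, lines in prs)
    # A crude pattern hunt: how often does the same leading word recur
    # in the titles, hinting at doing the same thing over and over?
    first_words = Counter(title.split()[0].lower() for title, _ in prs)
    return {"count": len(prs),
            "lines": total_lines,
            "recurring": first_words.most_common(3)}

# Hypothetical exported history, not real data
history = [("Fix login redirect", 12),
           ("Fix session timeout", 8),
           ("Add retry to upload", 40),
           ("Fix login redirect again", 5)]
print(summarize(history))
# → {'count': 4, 'lines': 65, 'recurring': [('fix', 3), ('add', 1)]}
```

The same idea extends to the third level: replace titles with review comments and the counter becomes a first draft of that feedback taxonomy.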

In the moment, if I stopped to analyze my actions with my most critical mindset, I would be paralyzed, too afraid to do things. But doing the same thing in hindsight, in cadence, as if I were looking at someone else (and sometimes looking at someone else to have a comparison point), is invaluable.

Own your own learning. Never become the person the team always has to tell about the pizza box, even if they created a linter for that purpose. Think, learn, and try something different. Fail, because it is a First Attempt In Learning.

Don't wait for your manager to do this, know your own signature first.


Monday, July 29, 2019

Getting to Know Great People aka Call for Collaboration

Year 5 of organizing European Testing Conference has just started. We took note of possible locations and decided to head to Amsterdam. And with the location chosen, figuring out the program comes next.

What I really want to do on conference talk selection is to invite people to speak. Save them the energy of preparing a submission that may end up rejected. Make them feel like they are recognized, noticed and thus invited. I could do that with many people. As soon as I find them, I could invite them.

But if I did that, what about all the awesome people I have not paid attention to yet, whom I may not have had a chance of meeting? They may not go to other conferences (where I find people), they may not be particularly active on social media (where I find people), and they may not work for the considered-cool companies (where I find people).

To balance my troubles, since year 3 of European Testing Conference, we have done a Call for Collaboration instead of a call for proposals. And I'm learning how to best run it.

With a call for collaboration, we ask people to make their existence known, balancing our ability to make good decisions on content against the use of their time. To do so, we have asked potential speakers to have a 15-minute Skype call with us, to discuss together what their topic is and how it would fit our idea of the conference.

Here's the math I work against. 200 people submit. Each uses 4 hours to prepare an abstract. That is 800 hours of abstract preparation. We choose 5%. 760 hours of other people's work is wasted, or hopefully is at least a useful learning experience or reusable for other conferences. I'd rather use 50 hours on calls, where the individually wasted time goes down by 3.75 hours for every single submitter. I have to use 50 hours instead of 10, but my extra 40 hours are not of more value than their 760 hours. I may run a conference, but I am their peer: another tester, another developer, another manager.
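
The arithmetic above is easy to check; the only assumption beyond the text is that one call takes 15 minutes of the submitter's time:

```python
# Call for proposals vs. call for collaboration, in hours.
submitters = 200
hours_per_abstract = 4
acceptance_rate = 0.05
call_hours = 0.25  # a 15-minute Skype call (assumption)

total_prep = submitters * hours_per_abstract           # 800 hours of abstracts
accepted = round(submitters * acceptance_rate)         # 10 talks chosen
wasted = (submitters - accepted) * hours_per_abstract  # 760 hours for the rejected
organizer_hours = submitters * call_hours              # 50 hours of calls
saved_per_submitter = hours_per_abstract - call_hours  # 3.75 hours each

print(total_prep, accepted, wasted, organizer_hours, saved_per_submitter)
# → 800 10 760 50.0 3.75
```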

There are people who find the idea of a Skype call a blocker. This year, we introduced an optional pre-screen route to just tell us the talk idea (not prepare the full abstract and description) in writing, so that if the topic seems like the right fit, we can reach out to have our discussion in a format they find comfortable.

Some people are terrified of the idea of being rejected on a face-to-face call, and we surely can never work enough to make people understand that it is not about the call but about the program fit. We select a very small percentage of the talks we are considering because we are running a 3-track interactive conference with a limited number of talk slots.

The way we want to approach these calls is as a discussion between peers in testing, geeking out on topics. We know everyone is worth a stage, and we need to try to build something that fits our vision for our stages. We try to find great stories, good illustrations, practical experience that highlights work different enough from what we would end up choosing otherwise. Trying to guess what might make it without the collaboration is hard.

Every year, we've had talks that were not submitted but ended up being discovered. They are often specific techniques that originally were a sidetrack in an agile transformation story, where they deserved the full focus.

I hope you trust us to have one of these discussions with you. We seek a mix of testing as testers, developers, designers and managers know it. There is no reason we couldn't discover your experiences to highlight a perspective, and you identifying what makes you tick is our starting point for the conversation.

Join us and schedule your own session for discussion.  

Thursday, July 25, 2019

Telling what testers do in simple terms

As I was browsing Facebook, I read a comment from a friend on testing, stating there are three things all testers need to learn: automating, exploring and telling what we're doing in simple terms.

I find that automating and exploring are activities within exploration, when you know how to write code and can make thoughtful decisions on documenting with code or using throwaway code to extend your reach. Yet both of these things are so wide and varied that you can spend a lifetime learning to get really good at them, and listing the microskills within them would probably be more helpful. I know both of the two already, but I don't know all of the microskills. How could I better show what I do and don't know?

The really fascinating part, though, was the third key thing my friend called for: telling what we're doing as testers. Explaining our worth. Explaining what we've done, what we'll do, and why it takes more than 10 minutes. People's ideas of how testing happens are often so shallow that testing != testing.

Talking about what is going on in testing isn't simple and straightforward. And talking about status isn't a skill we all have equally developed.

I was explaining this on a ride today. If you imagine testing is like painting a wall, you can expect that the work depends on the circumstances of doing it. A breaking brush will make your progress slower, and you may not know everything that will happen when you are just getting started. There can be nooks that require more effort. You could stop at any time, leaving an artistic impression. You could approach the painting in many ways. But if you leave a corner undone, it would at least be good to give the others a heads-up, sooner rather than later, that you'll be running out of time and don't see yourself getting there. If you notice a part of the surface being harder to work on, make others aware of the surprises and allow them to pitch in and help with some of the parts.

Describing something invisible is much harder. Yet we see the same troubles over and over again in talking about what we're doing. We need to get better at explaining what we're doing in simple terms. And at minimum, we need to stop assuming people don't want to hear anything but a binary done vs. not done.

Monday, February 18, 2019

European Testing Conference SpeedMeet - How To?

Picture a conference you went to, alone. You don't know anyone, you're not sure if they want to talk about exploratory testing (your favorite) or test automation (not your favorite), and you don't feel like you have the energy to push yourself on random strangers. You show up, sit at a table, watch people around you discuss, and listen until it is again time to head to a session.

As a socially anxious extrovert, I have had huge problems with conferences. I want to talk to people, but the need to take the first step and find out if they want to talk to me drains me. My usual recipe is to be a speaker, and have people approach me. But the same issue drove me to figure out other designs for my conference, and SpeedMeet was born.

SpeedMeet puts together three insights:
  • Pairing people up with a rule for introductions is an effective way of building relationships. The rule helped people meet at Scan Agile, and we wanted to do more sessions where social interaction wasn't emergent but facilitated.
  • The meeting needs an artifact that introduces pull over push in introductions. This piece we found in Jurgen Appelo's talk at Agile Serbia, and combined with my personal aversion to talking about beer (push information often provided in the tester community), the connection to the right dynamic was evident.
  • The high-volume, high-interaction event needs an escape route and permission to use it. This piece became evident through experimenting with large crowds listening to feedback.
So how does this work?
  1. Mindmap
    Create a mindmap about the stuff you would want to say if you got to introduce yourself. The usual triggers we use to help people figure this out are mapping three 2nd-level nodes with titles like "Personal", "Work", "In the last year" or "I want to learn" and "I want to teach". This map needs to be available to you when joining the session.
  2. Shared space - standing up or loose rows of chairs
    Join the space for the speed meet, and find a pair. We have two rows of people, either standing up or sitting down. The person across from you is the first person you will get to know.
  3. Opening the session
    Before people start talking (and the volume goes up), explain what we are doing. Explain that we switch pairs every 5 minutes. Have people practice everyone moving one to the right and getting a new pair. Explain that you are not allowed to say a thing without the other person asking you about it, based on what they see in your mindmap and want to talk about.
  4. Timing and rotation
    You may want to practice this. Everyone moves to their right. This means you get to talk to every second person in the line. We find 5 minutes is a good length for a mutual discussion starter, and enough to know if this person is someone you want to continue with over lunch and the rest of the conference. It still gives you chances of finding the right person(s) for you by meeting 7-8 people in a 45-minute session.
  5. Pull information from the map
    Don't use the map to introduce yourself. Physically hand the artifact over to the other person, who can read your mindmap and ask questions about what they want to know. This avoids awkward discussions on topics that are not mutually shared, and extroverted people lecturing about themselves. You can control what information you pull. If you see "2 kids" and don't want to talk about kids, don't. Select something you are comfortable having a conversation about. Take turns on each other's map, pulling piece by piece.
  6. Leave whenever
    If you feel overwhelmed and want to step out, it is easy to do at the time of switching partners. Just step out and relax. You are in control.
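
The rotation in step 4 can be sketched as a small simulation (an illustration only; the row size and round count are assumptions, not part of the event design). Two rows face each other and rotate one step in opposite directions each round, so the opposite row advances two places past you per round - you meet every second person:

```python
def simulate(row_len, rounds):
    """Track who meets whom in a two-row speed meet with rotation."""
    row_a = list(range(row_len))               # people 0..n-1
    row_b = list(range(row_len, 2 * row_len))  # people n..2n-1, facing row A
    met = {p: set() for p in row_a + row_b}
    for _ in range(rounds):
        for a, b in zip(row_a, row_b):         # facing pairs talk
            met[a].add(b)
            met[b].add(a)
        row_a = [row_a[-1]] + row_a[:-1]       # one row rotates one way...
        row_b = row_b[1:] + [row_b[0]]         # ...the other the other way
    return met

# 45 minutes at 5 minutes per pair is roughly 8 rounds of talking
met = simulate(row_len=9, rounds=8)
print(len(met[0]))  # person 0 has met 8 different people
# → 8
```

With an even row length, this closed-loop model would cycle back to the same partners halfway through; an odd row length (or letting people wrap around at the ends of real rows) avoids that.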
For European Testing Conference, we have now done this activity for three years. It opens up the sharing and networking nature of the conference and sets the bubbly discussion tone. You will meet people here. Everyone is with everyone. And if you know a little about someone, continuing from there is a lot easier. The activity does some of that heavy lifting for you - just play by the rules, and manage your own energy level by stepping out if your interaction limit is reached.