
Thursday, November 29, 2018

Forced Consistency Across Teams

The first thing I taught our latest 15-year-old trainee was that we believe rules and processes need to be built with a core principle in mind: trust.

If someone might commit a crime, we don't put everyone in jail.

Many corporate processes and principles are an illustration of people really not getting this idea.

We make people clock in and out of work so that we know they worked. Except in this industry, time at the place of work is a ridiculous metric. I should know, I've just had two weeks of motivational issues where I was at work and performed really badly (by my standards).

We make people write test cases and tick the box as they complete them, because, quoting an old manager of mine, "no one in their right mind would test if they were not monitored in this detail". Well, I did. And still do. And I love it. The devs loved it as soon as it wasn't tick-the-box.

We introduce common practices and processes for teams with very different skillsets and backgrounds, even when there is no common problem for those to solve.

When I look at processes and practices, I note that I have personal preferences. I prefer no estimates, no Jira, end-to-end visibility and a sense of ownership, understanding and solving problems. I recognize that everyone in my team has their own personal preferences, and I respect their preferences as I expect them to respect mine. I make compromises, like spending my time suffering with Jira just because they still haven't figured out that it isn't making them better. And they experiment with whatever ideas we all interject into the efforts of trying to make things better.

What inspired me to write this is a discussion about my personal dislike for definition of done.

I believe definition of done is a great tool for building a common understanding within a team of what they mean when they say done. I've used it, multiple times.

I've come to think of it as "definition of done for now".

I've learned that a deeper version of it is a risk-based definition of done for now, even within one team. Cookie-cutter templates rarely work for anything other than getting started.

I've experienced over and over again how forcing a definition of done on many teams for reasons of consistency is short-sighted. First you have to understand whether the teams are consistent, or whether some are steps ahead of others and approaching the same problem with a different solution could actually improve things more.

As with any practice or process, I don't accept that it would be our only option for improvement. Using time on one thing is time away from something else - the idea of opportunity cost. If a Definition of Done would help us make sense of a messy multi-team setting, would other approaches work too? Could you redesign the team compositions to force the architecture you aspire to, driving down the dependencies by leveraging Conway's law? Could you, instead of a Definition of Done (there are plenty of examples of what this contains), describe your team's responsibilities in some other format that would let you see a dimension DoD misses?

Using Jira states in a different way is hardly the reason why developers find it hard to start working on a new component. Looking at the code and its structures is a much more likely reason. Lack of documentation and training is a much more likely reason.

Go for consistency when it solves a problem without introducing bigger ones. Putting everyone in jail because one might rob the bank tends to be a bigger problem.



The Broken Phone Effect

I was totally upset with a colleague of mine, and to ease my heart, I ranted about them to my manager. As that colleague reminded me just today, it is so hard to understand people like me who just talk about problems without expecting a solution. And this rant was like that. I wasn't expecting action. I just needed to talk.

Like so many times, I found that the metadata of what type of request was coming from me did not go through. While my communication headers included metadata asking for a sympathetic ear and a mirror to bounce things off, they only received the default: when presented with a problem, a solution is in order.

The solution, however, was particularly good this time. They suggested that I go and talk to my colleague. I did. It felt overwhelming and difficult. But it was the start of many great conversations where we built trust in one another, now knowing that we could constructively talk about anything and everything.

So I distilled my lesson. When I had something I felt strongly enough about to rant to another person, perhaps taking the extra step and talking to them directly, not about what *they do* but about what *I feel*, was a path worth taking.

With this lesson in mind, I've asked many people to talk to me directly when I'm involved in how *they feel*. I believe they cannot tell me what to do, but they can help me understand how what I do makes them feel, and I may choose to work on my behaviors, or at least help them understand how I make sense of the world. Remembering how hard those steps are, I appreciate that many people choose avoidance. I still choose avoidance in righteous anger when I feel neither my status nor past agreements justify me taking the first step. The term "emotional labor" comes to mind. Bridging disagreements requires it, and I'm tired of the expectation that it is my duty to perform it for others.

Over the years as I have been blogging, people have reached out to tell me they've been through what I write about. While I write for myself, I also write for people like myself. People who aspire to change themselves, change the results they contribute, change the world in some small way. My stories are not factual representations of events, but personal intertwinings of many experiences that allow me to shine light on a relevant experience.

When my manager calls me to tell me I should not say "Google me", I wish the person I offended had had the guts to talk about this without the broken phone effect. I could have explained that I meant you would find articles I've written and research I've done, and that I did google your background enough to see that you are talking to someone who knows something about this stuff. Assume good intent. I rarely say things to insult.

If you have something to say, talk to me about it. If you don't but changed your ways for the better anyway, I'm fine with you being annoyed with me and avoiding me. Mutual loss, but we both have options.

A great option is to break the broken phone effect and just deal with your own stuff instead of sending a messenger. It might have a second-order positive impact.

I tried, I failed, I succeeded. I learned. Can you say the same? FAIL is a First Attempt In Learning, and it takes a significant amount of courage.

Saturday, November 10, 2018

Changing the Discussion around Scope

People have an amazing talent for seeking blame. Blame in themselves, for what they did wrong, but also blame in others, for what they did wrong. A culture of truly blameless retrospectives, where we honestly believe that F.A.I.L. means First Attempt In Learning and embrace more attempts, hopefully different ones, in the future, takes a lot of effort to build.

I've personally chosen a strategy around scope that relies heavily on incremental delivery. Instead of asking how long it takes to deliver something, I guide people into asking if we could do something smaller first. It has led my team into doing weekly, even daily releases where each release delivers some value without taking out what was already there. Always turning the discussion towards value in production, and the smallest possible increment of that value, has been helpful. It enables movement within the team. It enables reprioritization. And it means no one needs to escalate things to find a faster route to get the same thing done; the faster route is always the default.

We work a lot with the idea of being customer-oriented - even obsessed, if that weren't such a negative word. We think a lot in terms of value, empathy and caring, and seek ways to care more directly. We don't have a product owner, but a group of smart minds both inside the team and outside it, supporting the team with business intel. The work we all do is supposed to turn into value in customers' hands. Production first helps us prioritize. Value in production, value to production.

We didn't always deliver or work this way. We built the way we work in this team over the last 2.5 years I've had the pleasure of enjoying the company of my brilliant team.

Looking at things from this perspective, I find there is a message I keep on repeating:

If you have a product owner (or product management organization) and they ask you to deliver a feature that customers are asking for, they don't know everything, but they do their best at understanding what that would be like. They define the scope in terms of value with the customer.

If they ask you to estimate how much work that is, you need to have some idea of the scope. Odds are, your idea of the scope isn't the same as theirs, and theirs is incomplete to begin with. The bigger the thing asked for, the more the work unfolds as we are doing it.

They asked you for value 10. You thought it would take you effort 10. That is already two ways of defining the scope.

In delivery, you need to understand what the value expected really is. Often it is more, in terms of effort, than you first guessed.

Telling folks stuff like "you did not say the buttons needed to be rounded, like all the other buttons" or "the functionality is there, but the users just won't find it" may mean it works as specified but not as really expected. I find that those trying to specify and pass the spec along do worse than those trying to learn, collaborate and deliver incrementally.

Scoping is a relationship, not something that is given to me. We discover features and value in collaboration, and delivering incrementally helps keep the discussion concrete. Understanding grows at every step of the way, and we should appreciate that.

** note: "Scope does not creep, understanding grows" is an insight I have learned from Jeff Patton. For many things I know where I picked them up, while for more I can no longer pinpoint where a great way of describing my belief system came from. I'm smiling wryly at the idea of mentioning the source every time I say this at the office - we're counting hundreds. 

Getting the best ideas to win

There's a phrase I keep repeating to myself:
Best ideas win when you care about work over credit. 
A lot of the time, if you care about being attributed for the work you are doing, the strategies for getting the best ideas out there and implemented evade you. If you don't mind other people taking credit for your ideas (and work), you make a lot more progress.

Mob programming is a positive way of caring about work over credit. There we are all mutually credited for what comes out. On the other hand, it is hard knowing that something would not be what it is without you, while the likelihood of anyone recognizing your contribution in particular is low.

At TestBash Australia, we had a hallway conversation about holding on to the credit you deserve, and I shared a strategy I personally resort to when I feel my credit is unfairly assigned elsewhere: extensive positivity about the results, owning the results back through marketing them. People remember who told them the good news.

As a manager in my team, I've now tried going out of my comfort zone by sharing praise in public. After two attempts at it, I am frustrated at feeling corrected. I'm very deliberate about what I choose to say, who I acknowledge and when. I pay a lot of attention to the dynamics of the teams, and see the people who are not seen, generally speaking. What I choose to say is intentional, but so is what I choose not to say.

This time, I chose not to acknowledge the great work of an individual developer when getting a component out was very clearly team work. I remember a meeting I called together 5 weeks ago to guide the scope of the release to something smaller, with the success of "that is ready, tomorrow". I remember facilitating the dedicated tester in designing the scope of testing, to share that there were weeks' worth of testing after that "ready". I remember how nothing worked while "ready", and the great work from the tester in identifying what needed attention, and the strong-headedness of not accepting bad explanations for real experiences. I remember another developer from the side guiding the first developer into creating analytics that would help us continue testing in production. I remember dragging 3rd parties into the discussion, and facilitating things for better understanding amongst many, many stakeholders. It took a village, and the village had fun doing it. I would not thank one for the work of the village.

Just a few hours later, I was feeling joy as one of the things I did acknowledge specifically was unfolding into wider knowledge in a discussion. I had tried getting a particular type of test created where it belonged, and failed, and made space for it to be created in my team. The test developer did a brilliant job implementing it and deserved the praise. Simultaneously, I felt the twitch of the praise I lacked for finding the way in an organization that was fighting back against doing the right thing and refusing feedback.

I can, in the background, remember to pat myself on the back, and acknowledge that great things happen because I facilitate uncomfortable discussions and practical steps forward. Testing is a great way of doing that. But all too often, it is also a great way of keeping yourself in the shadows, assigning praise where it wouldn't be without you.

Assigning credit is hard. We need to learn to appreciate the whole village.

Saturday, October 13, 2018

Finding the work that needs doing in multi-team testing

There's a pattern forming in front of my eyes that I've been trying to see clearly and understand for a good decade. This is a pattern of figuring out how to be a generalist while a test specialist in an agile team working on a bigger system, meaning multiple teams work on the same code base. What is it that the team, with you coaching, leading and helping them as a test specialist, is actually responsible for testing?

The way work is organized around me is that I work with a lovely team of 12 people. Officially, we are two teams, but we put ourselves all together to have the flexibility to organize into smaller groups around features or goals as we see fit. If there is anything defining where we tend to draw our box - one that we are not limited by - it is drawn around what we call clients. These clients are C++ components and anything and everything needed to support those clients' development.

This is not a box my lovely 12 occupy alone. There are plenty of others. The clients include a group of service components that have since the dawn of time been updated more like hourly, and while I know some people working on those service components, there are just too many of them. And for the other components I find us working on, it is not like we'd be the only ones working on them. There are two other clear client product groups in the organization, and we happily share code bases with them while making distinct yet similar products out of them. And to not make it too simple, each of the distinct products comprises a system with another product that is obviously different for all three of us, and that system is the system our customers identify our product with.

So we have:
  • service components
  • components
  • applications
  • features
  • client products
  • system products
When I come in as a tester, I come in to care for the system products from the client products' perspective. That means that to find some of the problems I am seeking, I will need to use something my team isn't developing to find problems that are in the things my team is developing. And as I find something, it really no longer matters who will end up fixing it.

We also work with the principle of an internal open source project. Anyone - including me - in the organization can go make a pull request to any of the codebases. Obviously there are many of them, and they are in a nice variety of languages, meaning what I am allowed to do and what I am able to do can end up being very different.

Working with the testing of a team that has this kind of responsibility isn't always straightforward. The communication patterns are networked, and sometimes finding out what needs doing feels like a puzzle where all pieces are different but look almost identical. To describe this, I set out to identify the different sources of testing tasks for our responsibilities. We have:
  • Code Guardianship (incl. testing) and Maintenance of a set of client product components. This means we own some C++ and C# code and the idea that it works.
  • Code Guardianship and Maintenance of a set of support components. This means we own some Python code that keeps us running, a lot of it being system test code. 
  • Security Guardianship of a client product and all of its components, including ones we don't own. 
  • Implementing and testing changes to any necessary client product or support components. This means that when a team member in our team goes and changes something others guard, we as a team ensure our changes are tested. The maintenance stays elsewhere, but we contribute all the other things.
  • End to end feature Guardianship and System Testing for a set of features. This means we see in our testing a big chunk of end users experience and drive improvements to it cross-team. 
  • Test all features for remote manageability. This means that for each feature, there is a way of using that feature that the other teams won't cover but we will. 
  • Test other teams features in the context of this product to some extent. This is probably the most fuzzy thing we do. 
  • First point of support for all client product maintenance. If it does not work, we figure out how and who in our ecosystem could get to fixing it. 
  • Releases. When it has all already been tested, we select what goes out and when, and do all the practicalities around it. 
  • Monitoring in production. We don't stop testing when we release, but continue with monitoring and identifying improvement needs.
To do my work, I follow my developers' RSS feeds in addition to talking with them. But I also follow a good number (60+) of components and the changes going into those. There is no way Jira could any longer provide me the context of the work we're responsible for, and how it flows forward.
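
To make that concrete: here's a minimal sketch, with hypothetical feed URLs standing in for our real setup, of how a few lines of Python with the feedparser library can pull the newest changes from many component feeds into one chronological list:

    import feedparser  # third-party: pip install feedparser

    # Hypothetical commit feeds for components we guard - replace with real URLs.
    FEEDS = [
        "https://builds.example.com/client-core/commits.atom",
        "https://builds.example.com/support-tools/commits.atom",
    ]

    def latest_changes(feed_urls, per_feed=5):
        """Collect the newest entries from each component feed."""
        changes = []
        for url in feed_urls:
            parsed = feedparser.parse(url)
            for entry in parsed.entries[:per_feed]:
                changes.append((entry.get("updated", ""), entry.get("title", ""), url))
        # ISO-style Atom timestamps sort lexicographically, newest first.
        return sorted(changes, reverse=True)

    for when, title, source in latest_changes(FEEDS):
        print(when, title, source)

One skim of a list like that in the morning gives the cross-component context that no single Jira board shows.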

I see others clinging to Jira with the hope that someone else tells them exactly what to do. And in some teams, someone does. That's what I call my "soul sucking place". I would be crushed if my work was defined as doing that work identification for others. My good place is where we all know the rules of how to discover the work and volunteer for it, how to prioritize it, and what of it we can skip at low risk because others are already doing some of it. 

The worst agile testing I did was when we thought the story was all there is. 

Thursday, October 11, 2018

How to Survive in a Fast Paced World Without Being Shallow


As we were completing an exercise in analyzing a tiny application for how we would test it, my pair looked slightly worn out and expressed their concern about going deeper in testing - time. It felt next to impossible to find time to do all the work that needed doing in the fast-paced agile, changes and deliveries, stories swooshing by. Just covering the basics of everything was full-time work!

I recognized the feeling, and we had a small chat on how I had ended up solving it by sharing much of the testing with my team's developers, to an extent where I might not show up for a story enough to hear it swoosh by. Basic story testing might not be where I choose to spend my time, as I have a choice. And right now I have more choices than ever, being the manager of all the developers.

**Note: the developers I have worked with in the last two places I've worked are amazing testers, and this is because I don't hog the joy of testing from them but allow them to contribute to the full. Using my managerial powers to force testing on them is a joke. Even if there is a little truth to it. 

Even with developers doing all the testing they can do, I still have stuff to test as a specialist in testing. And that stuff is usually the things developers have not (yet) learned to pay attention to.

For browser-based applications, I find myself spending time in browsers other than the developers' favorite, and with browser features set away from the usual defaults.

For our code and functionality, I find myself spending time interrogating the other software that could reside in the same environment, competing for attention. Some of my coolest bugs are in this category.

For value lacking in anything, I find myself spending time using the application after it has been released, combining analytics and production environment use in my exploration.

To describe my testing tactic, I explained the overall coverage that I am aware of, and then how I choose my efforts in a very specific pattern. I first do something simple to show myself that it can work, to make sure I understand what we've built on a shallow level. Then I leave the middle ground of covering stuff for others. Finally, I focus my own efforts on adding the things I find likely that others have missed.

This is patchy testing. It's the way I go deep in a fast-paced world so that I don't have to test everything in a shallow way.

Make a pick and remember: with continuous delivery, you are never really out of time for going deeper to test something. That information is still useful in future cycles of releasing. At least if you care about your users.


Saturday, October 6, 2018

Time warp to the Principle of Opportunity Cost

This Friday marked a significant achievement: we had 5-figure numbers of users on the very latest versions of the software we worked on, every single day. Someone asked about the time from idea to production, and learned this took us seven years. I was humbled to realize that while I had only been a part of it for two, I had pretty much walked through the whole path of implementing & testing and incremental delivery to get where we were.

When I worked at the same company on sort-of-same products over 12 years ago, one of the projects we completed was something we called WinCore. Back then the project involved combining the ideas of a product line and agile to have a shared codebase from which to build all the different Windows products. I remember frustrations around testing. Each product built from the product line had pieces of configuration that were essentially different. This usually meant that as one product was in the process of releasing, they would compromise the others' needs - for the lack of immediate feedback on what they broke.

Looking at today, test automation (and build automation) has been transformative. The immediate feedback on breaking something others rely on has resulted in a very different prioritization scheme that balances the needs of the still three products we're building.

The products are sort-of-same, meaning that while I last looked at them from a consumer point of view, this time I represent the corporate users. While much of the code base serves similar purposes as back then for the users, it has also been pretty much completely rewritten since, and does more things than it did back then. A lot of the change has happened so that testing and delivering value would flow better.

Looking at the achievement takes me back to thinking of what the 12-years younger version of me was doing as a tester, compared to the older version of me.

The 12-years younger version of me used her time differently:

  • She organized meetings, and participated in many. 
  • She spoke with people about the importance of exploratory testing, with emphasis on the risks in automation and how it could fail.
  • She was afraid of developers and treated them as people with higher status, and carefully considered when interrupting them was a thing to do.
  • She created plans and schedules, estimated, and used effort to protect the plans with metrics.
The 12-years older version of me makes different choices:
  • Instead of being present in meetings, she sits amongst people of other business units doing her own testing work for serendipitous 1:1 communication. 
  • She speaks for the importance of automation, and drives it actively and incrementally forward, avoiding the risks she used to be concerned about. She still finds time for hands-on exploratory testing, finding things that would otherwise be missed. 
  • She considers fixing and delivering so important that she'll interrupt a developer if she sees anything worth reporting. She isn't that different from the developers, especially on the goals that are all common and shared.
  • She drives incremental delivery in short timeframes, which removes the need for plans and estimates, and creates no test metrics.
Opportunity cost is the idea that your choices as an individual employee matter: doing one thing is always at the cost of not doing another. What value you choose to focus on matters. You can choose to invest in meetings or in 1:1 communication. You can choose to invest in warning about risks or in making sure the risks don't materialize. You can choose to test manually or to create automation scripts. When you're doing something, you are not doing something else. 

Are you in control of your choices, or are someone else's choices controlling you? You're building your future today; are you investing in a better future or just surviving today? 



Saturday, September 29, 2018

Experiment away: Daily Standup Meetings

In agile, many of us speak of experiments. There's a lot of power in the idea of deferring judgement and trying things out, seeing what is true through experience rather than letting our minds fool us by mirroring past experiences into what could work.

Experimenting has been my go-to way of approaching things. The best thing to come out of it for me has been mob programming. A talk four years ago by Woody Zuill introduced something I intellectually knew I would hate to do, and it ended up transforming not just my future but the way I see my past. Through cognitive dissonance - the discomfort in the brain when your beliefs and actions are not in sync - mobbing got my beliefs rewritten. If I was ever asked for my top three pieces of advice, experimentation would be on the list, as well as not asking for permission and stopping list-making to actually get stuff done.

Experiments with practices and people are not really very pure; they are more like interventions. The idea of trying before judging has power. But we keep thinking that for the same group, we could try different options without options expiring. The reality, however, is that when we do one thing, it changes the state of our system of humans. They will never be as they were before that experience, and it closes doors. And as one door closes, another one opens. An experimentation mindset moves us into states that enable even radical changes, when the state is right for that transition to happen.

My internal state: Dislike of meetings

Four weeks ago, our team had just received three new members all of a sudden. Our summer trainee had changed into a part-time appearance as school required their attention. My colleague appeared to feel responsible for helping people succeed, so they turned to their default behavior: managing tasks in Jira, passing their information in writing and turning half of my colleagues into non-thinking zombies working in the comfort of "no one wrote that in the Jira ticket". We talked about our challenges, and they suggested we needed to bring back daily meetings. I recognized my immediate strong negative response and caught it to say: "Yes, we should experiment with that".

I left the discussion feeling down. I felt like no one understood me. I felt like the work I loved was now doomed. I would have to show up in a standup meeting every day, after all I was the manager of the team. I would have to see myself stopping work half an hour before the meeting not to be late (being late is a big personal source of anxiety) and see again how a regular meeting destroys my prioritization schemes and productivity. It being in the middle of the day, I would again start working at a schedule that is inconvenient to me just to make space for uninterrupted time.

I knew I had worked hard to remove meetings from my job (unlike most managers around me, where meetings are their go-to mechanism for doing their work) and now the new joiners were forcing me to go back to a time I was more anxious and unhappy.

It's Just Two Weeks

Framing it as experiment helped me tell myself: "It is just two weeks, you can survive two weeks."

I sent the invites for the agreed time, showed up every day, tried hard to pay attention to sharing stuff of value, and focused my energies on seeing how others were doing with the daily meetings.

I was seeking evidence against my strongly held stance.

I learned that there are many different negative reactions daily meetings can bring forth.

  • Some people share every bathroom break they took and every trouble they ran into, and focus on explaining why it is so hard for them to make progress. 
  • Some people come in with the idea of time boxing and always mention two things. You have to say something, so they choose something they believe they need to say.
  • Some people report to others, some people invite others into learning what they've learned, others pass work forward. 
  • Some people have a low idea of their contributions and frame it by saying things like "I tested since yesterday, I will test more by tomorrow" - day after day. 
  • Some people collect the best of the last day to share in the meeting, and hold on to information for that meeting instead of doing the right thing (sharing, working with others) when they come upon the information. 
  • Some people are happy with the meeting because they have NEVER tried any other options: pairing, mobbing, talking freely throughout the day, having a culture where pulling information is encouraged to a level where your questions always have the highest priority. 
Only one of us expressed that they liked the meetings after two weeks. We did not make them work really well. We did not find a recipe that would bring out the best of our collaboration in those meetings. I could not shake the feeling that I was drowning in my "soul-sucking place": agile with rituals without the heart. So we agreed to stop. 

Things already changed

We could say the experiment failed. FAIL as in First Attempt In Learning. It did not stick. But it changed us. It made visible the problem some people were feeling: that we did not talk about the right stuff and help our newbies. It changed the way people now walk over to one another, giving us a way of talking about the options to a daily meeting. It changed how the team was sharing their insights on the team's channel. 

Things were better, because the experiment opened us up to other possibilities. 
Experiments in agile teams are not really experiments. They are interventions that force us to change state.
So, Experiment Away! We need to be interrupted.

Wednesday, September 19, 2018

Forgetting what Normal Looks Like

Today I reached 2 years at my current job, which I still very much love. There have been mild changes to what I do, with my title changing from "Lead Quality Engineer" to "Senior Manager", which shows up mostly in the whole team volunteering more easily to do testing tasks and enable testability without me asking any differently than before. There are a few reasons why I love my job that I can easily point to:
  • We have team ownership of building a better product. No product owner thinks for us, but some business people, sales people and customers think with us. From an idea to implementation and release, we've seen timeframes in days, which is very unusual. 
  • It's never too early or too late to test. Choosing a theme to give feedback on, we can work on that theme. With the power in us as a team, the tester in me can make a difference.
  • We can do "crazy" stuff. Like kick out a product owner. Like not use Jira when everyone else worships it. Like pair and mob. Like take time for learning. Like have a manager who closes their eyes to push the stupid approve button that is supposed to mean something other than "I acknowledge you as a responsible, smart individual who can think for themselves". Like get the business managers to not book a meeting but walk into the room to say hi and do magic with us. Like pull work, instead of being pushed anything. 
  • We are not alone, not stuck and generally people are just lovely even if there is always room for making us flow better together.
Living a life on the edge of what is "crazy" to some is fascinating when people don't all have the same past experiences. In particular, in recent months we have had new people join who have brought in their past experiences from very different ways of working: with daily meetings, detailed Jira tickets, thinking for the developer, etc. 

I've been experimenting with finding a better, more fun way for so long that I'm starting to forget what normal looks like. Today I found some appreciation for it. 

At almost two years in the job, I finally started a "Jira cleanup" about two weeks ago. I edited our team query to define what belonged to us: anything marked for the team, and anything marked for any of us that belonged in the team. All of a sudden, the few who had cared for the list based on their past experiences realized that the list was much more significant than they had thought. About 120 items. We called on an old rule: all other work will wait until we are below 50. We didn't get to it though. Some people cleaned some things up. Others were busy working on other things. 
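
For the curious, the query was along these lines - a sketch with made-up project, team field and usernames, not our actual JQL:

    project = CLIENTS AND resolution = Unresolved
      AND (Team = "Client Team" OR assignee in (alice, bob, carol))
    ORDER BY updated DESC

The point is less the syntax and more the definition: one query that makes the whole team's list visible, instead of personal lists nobody looks at.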

Instead of one-on-one clearing of the personal lists, we called a workshop to share the pain of cleaning the lists. I had no idea what I might learn from the workshop. 

I learned that half of the people had not used Jira beyond the "look at this individual ticket" use case. Seeing what was on the whole team list - a new experience. Seeing that you can drag-and-drop stuff like post-its - a new experience. Marking a case into a state or assigning it to a person - a new experience. 

Even with the Jira avoidance I advocate for (fix and forget immediately, track themes on a physical whiteboard), I had not come to understand that I might have missed sharing a basic set of skills with a group of people coming to us from elsewhere, with other expectations of what it would mean.

A healthy lesson in remembering that what is obvious to me may not be so to others. And that building complex things on lessons that are not shared might make less sense to those with fewer experimentation experiences under their belt. 

Wednesday, September 12, 2018

Devs Leading the Testing I Do

Dear lovely developer colleague,

I can see what you are trying to do. You try to care about testing. And I appreciate it. I really do. But when you try to care about something you don't really yet understand well, it creates these weird discussions we had today.

First I see you creating a pull request casually mentioning how you fixed randomly failing unit tests. That is lovely. But the second half of the sentence made no sense to me. While you were fixing the randomly failing unit tests, you came to the conclusion that rewriting (not refactoring, but rewriting) a piece of logic that interfaces with the system in a way unit tests would never cover was a good idea. I appreciate how you think ahead, and care for technical quality. But did you at least test the feature in the system context, or just rely on your design working after the rewrite, based on the unit tests?

Then I initiate a constructive discussion on how our system test automation does not cover the functionality you rewrote. I'm delighted with another developer colleague pitching in, volunteering to add that to test automation right away. For a few hours, I'm walking on happy clouds for the initiative. And then I talk to you again: you tell me that while the colleague had already started, you felt the need to write them a Jira ticket for it. And because there was a Jira ticket, I should no longer pay attention to the fact that your changes are in the release candidate we are thinking of giving to customers any moment now, as soon as testing of them is done. I appreciate that you think tracking it will help make it happen. But have you paid attention to how often tracking with Jira means an excuse for not doing it right now in our team? And how you assigning work to others means that fewer and fewer of us volunteer to do things, because you keep filling our task lists, identifying all sorts of things everyone else could be doing and assigning them around?

The day continues, and I find two features I want to personally spend some time testing. The first one I pick up was created by yet another lovely developer. They didn't tell me it was done either, but my superpowers in spying on folks' pull requests reveal the right timing for the test. This one was created by someone new, so I do more communication than I normally would. And I do it visibly. Only to have you, dear lovely developer colleague, again telling me how I should test it. Or rather, not test it. How there is someone else somewhere who will test it. Obviously I follow up to check what that someone else has promised to do, and they haven't. Or they have promised to do something different. Like 1/10 of what needs doing. So I walk over your great plans of leading the testing I do, only to find out that the feature I tested doesn't work at all. So we talk about it, and you seem to agree with me that this could have been the level of testing that takes place in our team. It would, in fact, be testing that takes place not only in our team, but by the developer introducing the feature. Missing it once is fine. But not making it a habit, through feedback, is better. I introduce some ideas of exploratory testing on how I will make it fail, once I first get it to pass. You show me a thumbs up, as if I needed you to approve the testing I do. I love how you care. But I wonder, do you realize I care too?

So I move on to the second feature I found, and confirmed that it will not be fully tested elsewhere. And you keep coming back to correct me on what I should be doing.

Could you please stop assigning me Jira tickets that isolate testing from the pull request or the implementation request you've worked on? Could you let me design my own work and trust that I know what I'm doing, without trying to micromanage me?

I appreciate you care, but we all care. We all have our say in this. Stop leading the testing I do, and start sharing information you have, not assigning me tasks.

I already think you're lovely. Imagine how awesome we'd be if you started believing I can do this? That we all can do this, and that you are not alone in this. That you don't need to track us, but work with us.

The Right Time for Testing

There is one significant challenge working as a tester with my current team: self-allocating work to fit the right priorities. With 12 people in the team, and significant responsibilities of both owning our components' test automation & other testing as well as accepting work into the system we are responsible for releasing in our internal open source community, there are a lot of moving parts.

In times before agile, a common complaint from testers was that we were not allowed to join the efforts early. But now we are always there. Yet yesterday someone mentioned not getting to be in the new feature effort early enough. With many things going on, being on time is a choice one has to make personally: to let go of some of the other work and trust that other people in the team (developers) can test too.

With incremental delivery, you can't really say what is "early" and what is "late". Joining today means you have a chance of influencing anything and everything over the course of improving it. There is no "it was all decided before the efforts started". That's an illusion. We know of some of the requirements. We're discovering more of them. And testers play a key role in discovering requirements.

We've been working on a major improvement theme since June. Personally, I was there to start the effort. Testing was there first. We're discussing timing, as I'm handing the "my kind of testing" responsibility to another senior, who is still practicing some of the related skills. They join later than I joined. But the right time for all of testing is always today: never too early, never too late.

Yesterday I was looking into helping them get into two new areas I cover regularly that they are getting started on: requirements in incremental delivery, and unit tests within the current delivery. 
I made a choice by the end of the day. I will choose unit tests and fixing a new developer. The other will choose requirements and fixing future iterations.

To succeed in either, we cannot be manual testing robots doing things automation could do for us. We're in this for a greater discovery that includes identifying things worth repeating. Exploratory testing feeds all other ways of testing and developing.

The right time for testing is today. Make good choices on what you use that time on. There's plenty of choices available, and you're the one making them even when choosing to do what others suggested or commanded.

Saturday, September 1, 2018

Think Testing as Fixing

"Wait, no! We have not promised THAT!", I exclaimed as I saw a familiar pattern forming.

There was an email saying we would be delivering feature Gigglemagickles for the October release. We had not really even started working on it. We had just been through our priorities and it wasn't on them. And yet someone felt certain enough to announce it.

I started tracking what had happened.
  • In June, someone asked a tester if Gigglemagickles might work. They reported that they had ONE TEST in test automation that wasn't failing. It wasn't testing the whole thing either, but it wasn't failing. We could be carefully positive.
  • In early August, someone asked me if I thought it should be done, and I confirmed it should if we could consider not building a software-based licensing model that would enable forcing a different pricing. We did not agree we would do it, we just talked about it making sense.
  • In late August, it had emerged in emails and turned into micromanagerial Jira tickets assigned to individuals in my team, telling them to test Gigglemagickles for compatibility. 
  • Since it was now around, another micromanagerial command was passed from a developer to a tester to start testing it. 
  • As I brought the testers back to what needed to be done (not Gigglemagickles), we simply added a combination test to the automation suite to know if there was a simple way of seeing that Gigglemagickles did not work, beyond the one test. 
What a mess. People were dropping things they really needed to do. They were taking mistakes as the new high-priority order, without much critique. 

Half an hour later, with the testers back on track, we found big problems in other themes that we had promised. The diversion was postponed. 

We failed again at communication. Gigglemagickles alone was probably OK as a client feature. But it depended on a backend system no one asked about or looked at. And it depended on co-existence with other features someone was now adding as a testing task for us, at a time when there was no time available. We had fallen for developer optimism again: we don't know of implementation tasks, so let's promise it is ready. The bugs we have not yet found are not our concern. The testing undone isn't my problem. 

Let's all learn to think of testing as FIXING. If we didn't have things to fix after testing, we could just hand it out now. But we do, and we need to know what to fix. That's why we test. 




Saturday, August 4, 2018

Test-Driven Documentation

I remember a day at work from over a decade ago. I was working with many lovely colleagues at F-Secure (where I returned two years ago after being away for ten years). The place was full of such excitement over all kinds of ideas, and not only leaving things at the idea level but trying them out. 

The person I remember is Marko Komssi, a bit of a researcher type. We were both figuring out stuff back then in Quality Engineering roles, and he was super excited, sharing about a piece he had looked into long before I joined. He had come to realize that in the time before agile, if you created rules to support your document review process, the top 3 rules would find a significant majority of the issues.

The excitement of ideas was catchy, and I applied this in many of my projects. I created tailored rules for requirements documents in different projects based on a deeper understanding of the quality criteria of that particular project, and the rule creation alone helped us understand what would be important. I created tailored rules for test plan documents, and they were helpful in writing project-specific plans instead of filling in templates. 

Over time, it evolved into a concept of Test-Driven Documentation. I would always start the writing of a relevant document by creating rules I could check it against.
The reason I started writing about this today is that I realized it has, in a decade, become an integral part of how I write documentation at work: rules first. I stop to think about what would show me that I succeeded with what I intended, and instead of writing long lists of rules, I find the top 3. 

Many of the different versions I've written are somewhere on my messy hard drive, and I need to find them to share them. 

Meanwhile, you could just try it for your own documentation needs. What are the 3 main rules you review a user story with? If you find your rules to be:
  • Grammar: Is the language free of typos?
  • Format: Does it follow the "As a... I want... so that..." template?
  • Completeness: Does it say everything you are aware of that is in and out? 
you might want to reconsider what rules make sense to use. Those were a quick documentation of discussion starters I find wasteful in day-to-day work. 
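
To show the rules-first shape in code - a sketch of my own for this post, with placeholder rules you would swap for your project-specific ones - the top-3 rules can even be written as executable checks a draft is run against:

    import re

    # Placeholder top-3 rules - replace with rules that reflect your project's
    # actual quality criteria, not generic template checks.
    RULES = {
        "names a user": lambda story: bool(re.search(r"\bAs an? \w+", story)),
        "states a goal": lambda story: "I want" in story,
        "states the value": lambda story: "so that" in story,
    }

    def review(story):
        """Return the names of the rules the story fails."""
        return [name for name, check in RULES.items() if not check(story)]

    draft = "As a subscriber I want weekly releases so that I get fixes sooner."
    print("Failed rules:", review(draft) or "none")

Writing the checks before the document is the test-driven part: they force you to state what success looks like before you start writing.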

Tests first, documentation then. Clarify your intent. 


Tuesday, July 31, 2018

Folks tell me testing happens after implementation

As I'm slowly orienting myself to the shared battles across more and more roles throughout the organization, returning to the office from a long and relaxing vacation, I'm thinking about something I was heavily drawing on whiteboards all around the building before I left. 

The image is an adaptation, based on my memory, of something I know I've learned with Ari Tanninen, and probably doesn't do justice to the original illustration. But the version I have drawn multiple times helps discuss an idea that is very clear to me and seems hard for others. In the beginning of the cycle, testing feeds development, and in the end of the cycle, development feeds testing. 


There are basically two very different spaces on the way from idea to delivery. There's the space that folks like myself, testers, occupy together with business people - the opportunity space. There are numerous things I've participated in saying goodbye to by testing them. And it's awesome, because in the opportunity space ideas are cheap. You can play with many. It's a funnel for choosing the one to invest some on, and not all ideas get through.

When we've chosen an idea, that's when the world most development teams look into starts: refining the idea, collecting requirements, minimizing it to something we can deliver a good first version of. Requirements are not the start of things, but more like a handoff between the opportunity space and the implementation space.

The implementation space is where we turn that idea into an application, a feature, a product. Ways to deal with things there are more like a pipeline - with something in it, nothing else gets through. We need to focus, collaborate, pay attention. And we don't want to block the pipeline for long, because while it is focused on delivering something, the other great ideas we might be coming up with won't fit in.

A lot of the time we find seeds of conflict in not understanding the difference between the cheap ideas we can toy with in the opportunity space and the selected ideas turning expensive as they enter the implementation space. Understanding that both exist, and play by very different rules, seems to mediate some of that conflict.

As a lead tester (with a lead developer by my side), we are invited to spend as much of our efforts in the opportunity space as we deem useful. It's all one big collaboration.

Looking forward to the agreed "Let's start our next feature with writing the marketing text, together". Dynamics and orders of things are meant to be played with, for fun and profit. 

Monday, July 23, 2018

Life after GDPR

In the last year, offices around the world have been like mine, buzzing with the words GDPR. The European General Data Protection Regulation became active and enforceable in May.

It's not like privacy of our users did not matter before. Of course it did. But GDPR introduced concepts to talk around this in more detail.

It assigned a monetary value to not caring that should scare all of us. An organization can be penalized with a fine of 4% of the company's global annual turnover or 20 million euros, whichever is greater.

It introduced six requirements:
  • Breach Notification - if sensitive data gets leaked, companies can't keep it a secret. And to know that there was a breach, you need to know who has been accessing the personal data. 
  • Access to Your Data - if anyone has data on you, asking for it should get it to you, without cost.
  • Getting Forgotten - if the original purpose you consented to changes, or you withdraw your consent, your data needs to be removed.
  • Moving Your Data - if you want to give your data elsewhere, they should provide it in a machine-readable format.
  • Privacy by Design - if there's personal data involved, it needs to be carefully considered, and collecting private data just in case isn't a thing you can do.
  • Name Someone Responsible - and make sure they know what they're doing.
Getting ready for all of this has required significant changes around organizations. There have been needs to revise architectural decisions like "no data should ever be really deleted". There's been refining of what personal really means, and adding new considerations on the real need for any data belonging to that category. 

In a world where we build services with a better, integrated user experience, knowing our users perhaps through decades of their personal patterns and attributes, we are now explicitly told we need to care. 

So as a tester, looking at a new feature coming in for implementation, this should be one of your considerations. What data is the feature collecting or combining with, and what is the nature of that data? Do you really need it, and have you asked for consent for this use? Did you cover the scenarios of asking for the data, moving the data, or actually getting the data deleted on request? 

For us testers, the same considerations apply when we copy production data. Practices that were commonplace in insurance companies, like "protected data", are now not just for colleagues' data; we need to limit access and scramble more. I suspect test environments have been one of the last considerations addressed in GDPR projects in general, already schedule-challenged just to get minimally ready. 
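
As an illustration of what that scrambling can look like - a minimal sketch of deterministic pseudonymization with made-up field names; real GDPR-grade anonymization needs more than hashing - personal fields can be replaced with stable tokens so test data keeps its shape without exposing anyone:

    import hashlib

    # Hypothetical personal fields to scramble when copying production data.
    PERSONAL_FIELDS = {"name", "email", "phone"}

    def pseudonymize(value):
        """Replace a personal value with a stable, non-reversible token."""
        return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]

    def scramble(record):
        """Scramble the personal fields of a record, keep the rest as-is."""
        return {key: pseudonymize(val) if key in PERSONAL_FIELDS else val
                for key, val in record.items()}

    print(scramble({"name": "Jane Doe", "email": "jane@example.com", "plan": "corporate"}))

Deterministic tokens keep the relationships between records intact, which is often what makes scrambled production data still useful for testing.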

We should have cared before, but we should in particular care now. It's just life after GDPR came into force. And GDPR is a way of encoding some rules around the agency of individual people in the software-connected world. 




Sizing Groups Based on Volume

I've been reluctant to read my twitter timeline for the last few days. The reason is simple. There is an intensive discussion going on between two people I follow and respect, and the way they discuss things looks to me like they are talking past one another and really not listening. There was a part of the discussion I could not avoid though and it was around a claim that:
NoEstimates crowd is a small loud minority.
I don't really have measures, and I definitely don't have enough interest to invest my time in measuring this. But I'm part of that claimed minority; I just generally don't feel like being loud about it. And I most definitely don't want anything to do with the related discussion culture of meanness, shouting and insults that seems to be associated with that hashtag. The people against NoEstimates come off as outright mean and abusive, and I've taken some hits just by mentioning it - learned my lesson really quickly.

You can be loud on a conference stage, with your voice amplified as you're given the big stage. I listened to one popular speaker this spring who used a significant portion of their talk on ridiculing people in the NoEstimates space, mentioning them by name, and I felt very uncomfortable. It could be me they are ridiculing, and I don't think ridiculing, even when it makes people laugh hard in the talk, is the way we should be delivering our messages, no matter how much we disagree.

Y'all should know that volume is not how you size up a group. And the size of the group shouldn't matter, because there should be places where it is OK to do things in a different way without feeling attacked. It's my right to be stupid in ways I choose.

I've been seeking alternatives to wasting my time on estimating for more than 10 years. You could claim it is about me being bad at it, or not trying all the awesome things. I've been part of projects that did Function Point Analysis, and I still think that was awful. I've been part of projects that did work breakdown structures, estimating each item we broke out, but the problem is that the items remain vague, and instead of supporting continuous delivery of the most important part of the value, they have created big chunks of work that are hard to split value-wise. We've used past data of all sorts. And I've wasted a big chunk of my life creating stuff that I don't believe to be of any value, just because someone else thinks it will be good and I'm generally open to experiencing everything once.

The question about NoEstimates for me boils down to opportunity cost. What else, again, could I get with the time used on estimating? Are there options that would be better?

15 years ago one Friday afternoon, I sat down with my bottle of diet coke at the office coffee table. I had two colleagues join me with their coffee cups. One of them started the discussion.

"I'm going to be doing some overtime during this weekend", they said. "The product owner needs estimates on this Chingimagica Feature, and they need it for a decision on Monday", they continued.

We looked at the fellow, excited about the Chingimagica Feature, willing to sacrifice a major chunk of their weekend, and almost unanimously quoted "sustainable pace" and the general idea that giving up your weekends was almost always a bad idea. But they didn't mind; this was interesting, important, and they had all the info the product owner would need.

So we made a joke out of it. We took out post-it notes, and the other two of us wrote down our estimates for Chingimagica, each on a post-it note. We did not show our notes to the others, but just said we'd hand them over on Monday when the weekend of work was done.

They used 10 hours to create a detailed breakdown structure, and analyze the cost.

Monday came, and we compared. We all had the same estimate. They had more detail on why it came to what it did, but it was still the same.

That was when the need for finding better ways of working became evident.

It is OK that some people feel strongly about estimates, and some of them may be very successful with them. I see the morale decline in projects close to me that focus on estimates over continuous delivery, and feel I need to help us stop paying real money for something that hurts people.


Thursday, July 19, 2018

Skipping Ahead a Few Steps

I work with an agile team. I should probably say post-agile, because we've been agile long enough to have gone through the phases from hyper-excited to the-forbidden-word to real-continuous-improvement-where-you-are-never-done. But I like to think of agile as a journey, not a destination. And it is a journey we're on.

The place we're at on that journey includes no product owner and a new way of delivering with variable-length time boxes, microreleases and no estimates. It includes doing things that are "impossible" fairly regularly, but also working really, really hard to work smart.

Like so many people, I've come to live the agile life through Scrum. I remember well how it was described as the training wheels, and we have definitely graduated from using any of that to a much stronger focus on engineering practices within a very simple idea of delivering a flow of value. I know the whole-team side of how we work and how the organization around us is trying to support us, but in particular I know how we test.

This is an article I started writing in my mind a year ago, when I had several people submit experience reports to European Testing Conference on how to move into Agile Testing. I was inspired by the stories of surviving in Scrum, of learning to work in the same teams with programmers, but always with the feel of taking a step forward without understanding that there were so many more steps - and that there could be alternative steps that would take you further.

The Pushbike Metaphor

For any of you with a long memory, or kids that jog your memory, you might be able to go back and retrieve a memory of the joy of learning to ride a bike. Back in the days when I was little, we used training wheels on kids learning to bike, just to make sure they didn't crash and fall as they were learning. Looking around the streets in the summer, I see kids with wobbly training wheels they no longer really need, kept around just in case, soon to be removed as the kids are riding without them.

Not so many years back, my own kids were at an age where learning to bike was a thing. But instead of doing it like we always used to with training wheels, they started off with a fun thing called a pushbike. It's basically a small-size bicycle without pedals. With a pushbike, you learn to balance. And it turns out balancing is kind of the core of learning to bike.

My kids never went through the training wheels. They were scaring the hell out of me going faster on their pushbikes than I would, and naturally graduated to ones with pedals added.

This is a metaphor Joshua Kerievsky uses to describe how in agile we should no longer be going through Scrum (the training wheels), but figure out what the pushbike of agile is that takes us to continuous delivery sooner. Joshua's stuff is packaged in a really nice format with Modern Agile. It just rarely talks of testing.

People often think there's only one way to learn things, but there are other, newer, and safer ways.

The Pushbike of Agile Testing

What would the faster way of learning to be awesome at Agile Testing then look like, in comparison to the current popular ways?

A popular approach to regression testing when moving to agile is heavy prioritization (doing less) and intensive focus on automation.

A faster way to regression testing is to stop doing it completely and focus on being able to fix problems in production within minutes of noticing them, and on noticing them in the first place through monitoring (a minimal sketch of what that could look like follows these comparisons).

A popular approach to collaborating as testers is to move work "left" and have testers actively speak up in story discovery workshops, culminating in a common agreement on which examples to automate.

A faster way to collaborating is mobbing, doing everything together. Believing that, like in an orchestral piece, every instrument has its place in making the overall piece perfect. And finding a problem, or encompassing the testers' perspectives in everyone's learning, is a part of that.

A popular approach to reporting testing is to find a more lightweight way, and center it around stories / automation completed.

A faster approach to reporting testing is to not report testing, but to deliver to production in a way that includes testing.

A popular approach to reporting bugs is to tag story bugs to the story, and other bugs (found late, on closed stories) separately to a backlog.

A faster approach is for an internal tester to never report a bug, but to always pair to fix the problems as soon as they are found. Fix and forget includes having appropriate test automation in place.
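To give a flavor of what "noticing problems through monitoring" could mean, here is a minimal sketch of a synthetic production check in Python. The endpoint, interval and alerting hook are all my own hypothetical placeholders, not a description of how any particular team does this:

```python
import time
import urllib.request

# Hypothetical health endpoint and check interval - in a real setup these
# would point at your own service and match your alerting needs.
HEALTH_URL = "https://example.com/health"
CHECK_INTERVAL_SECONDS = 60


def check_once() -> bool:
    """Return True if the service answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as response:
            return response.status == 200
    except OSError:
        # Covers connection errors, HTTP errors and timeouts alike.
        return False


def alert(message: str) -> None:
    # Placeholder: wire this to your team's channel (chat, pager, email).
    print(f"ALERT: {message}")


if __name__ == "__main__":
    while True:
        if not check_once():
            alert(f"{HEALTH_URL} failed its health check")
        time.sleep(CHECK_INTERVAL_SECONDS)
```

The point isn't this particular script, but that the effort you would have put into regression test rounds goes into noticing and fixing fast instead.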

Tuesday, July 10, 2018

What is A/B Testing About?

Imagine you're building a feature. Your UX folks are doing what they often do, drawing sketches and having them tested on real users. You've locked down three variations: a red button, a rainbow button and a blue button. The UX tests show everyone says they want the red button. They tell you it attracts them, and red generally is most people's favorite color. You ask more people and the message is affirmative: red it is.

If you lived in a company that relies heavily on A/B tests, you would create three variations and make releases available with each of them. A percentage of your users would get the red button, and similarly for the two other colors. You'd have a *reason* why the button is there in the first place. Maybe it is supposed to engage the users to click, and a click is an order. Maybe it is supposed to engage users to click, and a click is just showing you're still active within the system. Whatever the purpose, there is one. And with A/B tests, you'd see if your users are actually clicking, and if that clicking is actually driving the behaviors you were hoping for.
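As a minimal sketch of how such a split could work - the hashing scheme and names here are my own assumptions, not any particular tool's API - you could deterministically bucket users into variants:

```python
import hashlib

# Hypothetical experiment: three button-color variants with equal traffic shares.
VARIANTS = ["red", "rainbow", "blue"]


def assign_variant(user_id: str, experiment: str = "button-color") -> str:
    """Deterministically map a user to a variant.

    Hashing the (experiment, user) pair keeps the assignment stable across
    sessions, so the same user always sees the same button color.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]


# Each click would then be logged together with its variant, so you can
# compare what users *do* per variant rather than what they *say*.
print(assign_variant("user-42"))
```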

So with your UX tests, everyone says red, and with your A/B tests, you learn that while they say red, what they indeed do is blue. People say one thing and do another. And when asked why, they rationalize. A/B tests exist, to an extent, because asking people is an unreliable source of information.

What fascinates me about A/B tests is the idea that as we introduce variation, and combinations of variations, we explode the space we have to test before delivering a product for a particular user to use. Sometimes I see people trusting that the features aren't intertwined and being ok with learning otherwise in production, thus messing up the A/B tests when one of the variation combinations has significant functional bugs. But more often I see people not wanting to invest in variations unless they are very simple, like the example of the color scheme of buttons.
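To illustrate how quickly that space explodes, here is a tiny worked example with made-up experiments; every combination is a distinct configuration some user could end up running:

```python
from itertools import product

# Made-up set of concurrently varied features, each with its own variants.
experiments = {
    "button_color": ["red", "rainbow", "blue"],
    "checkout_flow": ["one_page", "wizard"],
    "recommendations": ["on", "off"],
}

# Every combination of variants is a distinct product to test.
combinations = list(product(*experiments.values()))
print(len(combinations))  # 3 * 2 * 2 = 12 configurations
```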

A/B testing could give us so much more info on which of our "theories" about what matters to users really matter. But it needs to be preceded by A/B building of feature variations. I'm still on the fence on how much effort, and for what specific purposes, organizations should be willing to invest to really hear what the users want.

Sunday, July 8, 2018

The New Tasks for an Engineering Manager

I've now been through three stages of transitioning to an Engineering Manager role.

First stage started off as my team interviewed me and decided they would be ok having me as their manager. People started acting differently (too many jokes!) even though nothing was really different.

Second stage started when my manager marked me as the manager in the personnel systems and I got new rights to see and hear stuff. I learned that while as a tester I never had to do mundane clicking, that was pretty much the core of my new managerial responsibilities: accepting people's hour reports (only if they insist on doing them; we have an automated way for it too), people's vacations and expense reports.

Third stage started when I started finding the work left undone while the engineering manager position was open. The recruitment process with all its steps. Supporting new people joining the organization. Saying goodbye to people when we agree the person and the work are not right for each other. Rewarding existing people and working towards their fair pay.

I found an Engineering Managers' Slack group, and have been fascinated with the types of things Engineering Managers talk about. A lot of this stuff is still things I was doing while identifying as "individual contributor".

I've found two weird powers I have now been trusted with: terminating someone's contract is just as easy in the systems as accepting hour reports (and there is something really alarming in that). And as a manager, I have access to propose bonuses without having to do all the legwork I used to do to get people rewarded.

Officially one month into my new role and now one month on vacation. We'll see what the autumn brings. 

Sunday, June 24, 2018

Not looking for recipes for winning a game but for stopping the game

A long time ago, I learned a little game from James Bach: the beer game. It is a little exercise in trying to do something that people manage on a daily basis: buying a beer at a bar. Yet as the game progresses, it becomes clear that simple things can be hard. Your beer can come in a size you don't expect (you did not specify!), at a temperature you did not expect (you did not specify!) and with stuff you did not expect (you did not specify!). With the rules it's played by, you can only lose. And the only way to win is to stop playing by the current rules.

Software development has a lot of aspects like this. If we end up in defensive mode, one part of the organization pitted against another, we can always shift blame around. Estimates and predicted schedules that are by nature filled with uncertainty can easily turn into a blame game. And from the perspective of an individual team, I can easily find others to blame: it would have taken us a week, but we were blocked by another team; it would have taken us a week, but we thought you meant X when you meant X+1; it would have taken us a week, but then quality is flexible and we did not think of that aspect of quality this time. Like the beer game, this is a game you cannot win. Trying is a suboptimization. Instead, I find we need to look at the overall system, beyond an individual team.

Why do we want to have a date available? Because we don't have the stuff we need available today. Why do we think the stuff we have today isn't sufficient? Because we believe that if we had more, it would be easier to sell. Why do we need stuff to be easier to sell? Because our salespeople are most likely paid heavily on commission. Are there other ways to set up the salespeople's salaries? Most certainly yes.

I'm still in the process of asking why, but I already know this: I'm not looking for ways to win in a game we shouldn't be playing. I want to change the game into something we can all win together.

And for people thinking "2-3 hours of estimating every now and then, no big deal": at scale, with uncertainty, it is a big deal. And for people like me, who feel every estimate they've given in their every night's sleep, it is a matter of health and sickness. I won't actively lie when I know there are better ways of doing things. Thus estimation is not a routine I take lightly.