Wednesday, December 30, 2015

Polarities of test data

I've recently been testing a complete redo of our reporting functionalities, and I'm all in all surprised at how a pair of developers can think it works when it does not - even when there is a clear oracle, in the form of an existing previous implementation, to test against.

Testing this redo as if it were new functionality, but with a simpler oracle, has led me to create simpler test data. I first handle just individual pieces and then move further into combinations. The main principle guiding me is control: me in control of the data, understanding the basics before going into the complex and complicated.

My testing of the reporting functionalities was interrupted by a release with just a few little updates. All the updates were related to upgrading components, both 3rd party and our apps' shared components. These usually cause specific types of problems, so I ran a set of explorations around basic scenarios, but this time did not pay much attention to data. I used the data I had created for the reports: simple and controllable.

And a bug escaped us: there's a grid component that we had overloaded for height calculation for the purposes of one view, ending up with a problem where scrolling would fail in other places. A classic (for us) mistake: one developer tweaking the component to fit the one place they're working in, and the component then having trouble when used elsewhere.

For me the interesting lesson was about data. If I had been on my typical data, I could not have avoided seeing the bug. But since I was working with a narrow, limited, controllable set of data, the problem stayed hidden.

With continuous delivery, though, the problem was short-lived. But it led me to create two specific sets of data to reuse as part of my checklists. There's never just one set, but I can try to make smart selections of what I keep available.

Monday, December 14, 2015

Value in contributing or learning - when to automate tests

I listened to a podcast with Llewellyn Falco today. There are many things of interest in it, but one is worth turning into this blog post.

At one point, the discussion turns to failures. The first story Llewellyn tells about failure is having five developers work for a week on programming when the end result of that task is equivalent to 6 hours of manual one-time work. The story is about asking about value in advance. Good lesson.

I have had the privilege of many discussions about unit testing and test automation with Llewellyn. The story reminded me of my perception: with test automation, I often bring up the discussion about the real need of repeating, and the value of spending hours and hours automating something I don't care to repeat but could do manually in seconds. But while value seems like a relevant driver for me, the point we seem to end up with is doing things anyway, just to learn if (and how) they can be done. The value of learning (and solving a challenge) wins over the value of using that time on testing.

There's a rule for unit testing Llewellyn cites a lot: Test until bored. As long as you are contributing to the overall value that testing could provide immediately or in long term, you are not bored. As long as you are learning, you are not bored.

I'm puzzled by this: I can always name many things I intentionally don't test, as there is never enough time to test it all. I'm painfully aware of the opportunity cost of writing documentation or creating test automation over testing just a little more from the endless list of ideas.

I see my team's developers approach unit testing avoidance with arguments similar to those I use when selecting what is worth automating at the system or feature level. It makes me appreciate the approach test-oriented programmers like Llewellyn take: the challenge of learning is worth the effort.

But I still can't help but wonder: why wouldn't the same rule of overall value hold in the domain of testing as it does for other problems we solve with programming?

Perhaps because: 
  • adding the first test enables us to add more tests like it at a cheaper cost - repeatability means solving a particular problem in a particular way, in addition to repeating the same test by means of automation
  • if we don't automate because of a cost-value discussion (with unknown cost), it's just another trick in our bag of excuses
  • we don't really know how to estimate the cost before we have a solution; until then we can only discuss whether the problem is worth solving at all
  • the discussion about the unknown cost can take more effort than just doing it
The core difference between the story in the podcast and our common experiences of automating tests is in knowing how the problem can be solved. Perhaps the rule of "contributing or learning" applies in these examples.

Thursday, December 10, 2015

Birds and statues - or phrases to keep to yourself

There's a conflict emerging - on merge conflicts and how to deal with them.

Sometimes I wonder why I care - I'm a tester after all. But I do care, since the quality of the code matters to what kind of results we can achieve as a team with the help of testing. I care enough to nurture an atmosphere where refactoring is not only allowed, it is encouraged. We're not trying to stop change, we're trying to flow with change. And we've been relatively good at that, doing continuous delivery without test automation. Clean code is easier to work with. And in most cases, we have a pretty similar idea of what clean code is.

It seems there's an exception to every rule. My exception comes in the form of different beliefs. There's one developer (with 20+ years of experience) who believes that the things I've been taught to believe about teamwork and clean code are incorrect. That we should find ways of having code worked on by one individual only, to avoid merge conflicts - and the human conflicts that result from changing the status quo.

When I suggest we wouldn't have a conflict if we paired or mobbed, I'm not being helpful. So I try not to mention it - while I still think of it, a lot.

When I suggest we would have fewer conflicts if we had smaller methods and that refactoring would be a good idea, I'm told we should stop refactoring altogether and always just write new code. We can just add fresh branches, leaving the old code that worked there as is, still working. And code style and cleanliness is just an opinion anyway.

When I suggest doing smaller increments that can be released to contain the conflicts, I get a shrug. And I get a bunch of others saying how good a strategy that is, but also remarks on how this area is just different.

When I ask what we could do, I hear we could just work on different areas completely, in isolation. To avoid merge conflicts - and human conflicts. It's worked for decades, what would be different now?

There's a phrase that I've managed to keep to myself, one I mentioned before this all became urgent and pressing: be the bird, not the statue. I heard this at Agile 2015 from Arlo Belshee. In modern software development, the one who stays put is the one who gets hurt in merge conflicts. But saying that right now might again not be helpful. Still, I think of it - a lot. And I admire the conflict-averse other developers, who increase their birdlike qualities in all three dimensions of how they deal with the shared code, leaving one statue there to realize the implications later.

Friday, November 27, 2015

One experience of teaching kids creating with computers and programming

I've been slightly struggling with time management this autumn, having so many things to do and so little time. Today was the end of one experiment I wanted to do, and it turned out great, even if different from what I had in mind when I started.

I wanted to try out teaching 1st graders about creating with computers and programming in a format different from the voluntary, free-time-based approach of code schools and clubs in Finland. I wanted to start building a way of being helpful as part of classroom teaching, instead of adding more hobbies on the side. There's still a big challenge ahead in getting all the teachers up to speed with what they could be teaching with regards to programming, when it starts in all elementary schools for all grades in less than a year.

I contacted my girl's teacher and asked if we could do something together. With a little discussion, we came to the conclusion that we could do a few classes focused on storytelling with the kids. So we started making pictures and voice-overs to become a video of an alphabet story.

It was wonderful to see what kinds of story ideas the kids came up with from the letters. I was originally hoping to put these together into a CYOA story (Choose Your Own Adventure), but the timeframe of 2*45 minutes and a class size of 28 students soon pushed me toward a different idea. Just the collaboration would have required more time than we had.

The first session had two parts:
  1. Hello Ruby book handout. Every kid in both 1st grade classes got a free copy to take home, as well as the teacher. That was 48 copies. Linda Liukas and her publisher were very nice and enabled a cheaper unit price for the batch, and my non-profit work was able to finance the books as gifts. The kids were very curious about the books and with guidance from me and the teachers, named their own copies so that they would not get lost or mixed up. 
  2. Create a story with a picture, and record it. I had each kid choose a letter of the alphabet and draw a picture of a story they came up with. I asked them to come to the computer for recording, and showed them how to tell their story as a recording. I collected the hand-drawn pictures.
In between the sessions, I did a bit of work on the outputs of the first session. I paired with my girl at home on some of the work, and taught her video editing basics to cut out all of me and leave just the kids. We scanned the hand-drawn pictures and included them side by side with the stories. My girl recorded a beginning and an end with a proper microphone, and got excited about the idea of singing into the microphone. With the huge number of stories, the editing was quite a lot of work, taking me 3-4 hours. I cut out the silences, and even transformed one-word answers into something that resembles a story.

With that preparation, our second session had three parts:
  1. Creating a credits image to end the video. The teacher set up a touching game to add a little randomness to the order in which kids came over to the computer to add their own name to the signatures. When touched, it was your turn - and this created a positive sense of waiting for one's turn. With the projector, there was a lot of fun in this simple act: writing on the computer, getting your own name into the work, and others reading what was on the screen as it emerged. Every kid wrote their own name in small (non-capital) letters, and watching that happen for a few kids, I realized there was a great intro into the programming we'd do later. I showed how, with a computer, I can select all 28 names and turn them into words that start with capital letters with just one function.
  2. Watching the video together. We then watched our end result. Everyone, in their turn, looked like they were feeling a mixture of embarrassment and happiness, and the whole group was very focused on seeing their own stories. We talked briefly about the fact that the video is theirs and that I will not put it on the internet, nor should they - things like that should be agreed on in advance, and everyone should agree. The class teacher pointed out clarity of speech and how much easier it is to understand what great messages you have to say if you articulate, and I was just thinking I had missed a bit of editing when I did not raise the quiet voices to a higher volume.
  3. Talking and trying out programming. With the video created, we started talking about programming. We talked about someone having programmed the tool we could use to create videos, and programming as something that would allow us to create things we could imagine. We did two exercises from Hello Ruby book.
    We talked about small programs that consist of commands while learning Ruby's dance moves. Then we talked about us being the computers running the dancing program, and tried it with three repetitions. We ended the exercise by talking about doing the moves 1000 times and how computers don't get tired of repeating things you program them to do. And we talked about the idea that we could also tell the computer to stop when we say stop.
    The second exercise was the debugging exercise, which comes closer to my usual work. We looked at two or three examples of little instructions that computers could understand but that missed something. Like the idea that you would put your tablecloth on top of your birthday cake, or that you would eat when you're full and say thank you when you're hungry.
With a lot of laughter and a sense of having created something that wouldn't be there without you, I ended the session. Next up would be Hour of Code, in just a few weeks. I like those exercises, but I find them one-sided: to become smart creatives, we need to want to create, not just solve given puzzles. I call for freedom and support in implementing one's own ideas - the thing I remember loving but missing out on in the school system, including the computer science studies at university.

Creating with computers is the point for me. Programming is just a tool. Sometimes what we want to create requires programming. Sometimes advanced use of existing programs. But the thing we should not have is fear. Believing computers do what you want, demystifying them, is at the core of what I do professionally, whether I feel like being a tester, a programmer or a product analyst - today. Most days I love being a tester too much to ever want to be anything else.

Another experience in Mob Testing

Today, I had a visiting lecture at Aalto University of Applied Sciences. My theme was Testing in the industry, as it has been for many years in a row when they've invited me to share lessons from the trenches with the new aspiring computer science students. I love going and prioritize it high; after all, it's my university. It's where I used to study, it's where I used to do research on testing, and it's one of the major places in my career that have given me the space to grow with my interests in testing.

I agreed to the visiting lecture at the XP2015 conference. Since then, the teacher has changed, but the agreement on me doing a visiting lecture stayed. In previous years I've shared ideas about how I do my work and what I've learned about testing in these 20 years, but this year, with the new teacher, we came to another idea. What if I would just briefly tell what I do and how much I love it, and let the students experience my joy first hand in a mob testing session!

We had 1,5 hours (2 university lectures) and about 15 students. When they came into the room and sneaked into their places in the back row, I invited everyone to take their chair and come to the front of the class in a semi-circle. I had no slides to show, so we started with the application to test open on the screen, Eclipse with the Java code, and a browser window with MindMup in the background.

We talked about the roles of a practice mob testing session. The roles I assigned were the driver (no thinking at the keyboard), the designated navigator, and the other navigators, who would navigate through the designated navigator.

I introduced the application briefly. I told them it was called Dark Function Editor, and I navigated the first driver as an example to get the application to a point where we could start testing. There's a limitation I would call a bug that prevents the application from working unless you realize to not only start a project but also add an animation within that project, and I did not want them to get stuck on it. This time I also did not force them to log it, and as I had shown it as if it was expected, no one seemed to consider it a bug.

We rotated on three-minute turns, so the group kept moving at quite a high pace and organized to do that very fluently. The first few rotations were focused on trying out some features that ended up being just randomly picked by whoever was navigating at the time. The first navigators were clearly unsure of what to do, and turned very nicely to the help of their mob, so the group was contributing together quite early in the process. The first rotation was challenging: none of the navigators would volunteer ideas of where to start from a blank sheet, and I reminded them that in testing, if you freeze because you feel overwhelmed, it's a good idea to do something, anything. There's no wrong or right when all is a mystery to us.

The group found bugs, and one of them was something I had not personally noticed before, even though I have now used the same application in quite many sessions. They noticed that if you added the right kind of pictures in the right order, it became obvious that the preview and the actual layout had the order of pictures reversed. They just got lucky, putting pictures in a different order than my list of pre-prepared material had guided other groups to. I introduced the concept of writing bugs to the mind map. Then they ran into things they were unsure about - were they bugs? - and I introduced the concept of questions. With the concept of questions, I also introduced the concept of color-coding different items in your mind map. Within a few rotations, we had a few questions and a few bugs, all around the center item of the mind map.

However, as the facilitator I sensed that they were testing without a purpose. So I imposed a constraint, asking them to focus on creating an animation that they could export - just a basic flow. They were occasionally lured away by things they'd like to try, especially with the mob volunteering ideas more actively than could be acted on, and sometimes the driver and designated navigator would look a little overwhelmed. I kept advising that the final call on what gets done is the designated navigator's. With a few detours and me asking what our purpose was now, we got an animation done and some additional questions noted down.

With four things around the center node in the mind map, I took over navigation to introduce the idea that we can have deeper hierarchies. With the mob, we decided to have Bugs and Questions as nodes, and dragged what we had built by that time underneath them. I knew I'd much rather have feature areas here, with bugs and questions color-coded under the appropriate feature area, but the group was not there yet. They had not yet understood there were feature areas you could name and use for categorization.

At this point, I gave them another charter as their constraint. I pointed to a particular feature and asked them to list all the functionalities they could find. The feature I pointed to visibly has four buttons, and I've facilitated groups where testers think they are done after listing those. I was curious to see how this group would do. They did great.

Because of projector resolution issues, at first they couldn't see that there were four buttons, and they paid attention to only two. This led them, by accident, to create a good list of things, and they came up fluently with various features related to handling the list. They were trying out single clicks, learned that double click is a rename feature, and that right click wouldn't do anything, even though that is something you'd usually expect in functionality like this. They accepted the right click and dismissed the information. They tried drag-and-drop and started playing a little with the automatic and manual naming of the list items. I sensed they were starting to feel they might be close to done.

I took over the navigation, pausing my timer for the ongoing navigator, and navigated to add more items to the list. After that was done, I asked if people noticed another feature being revealed now - a scroll bar. With this revealed, there was more to learn. They noticed that the up/down arrows would also only work when there were more items, and with those keys they started paying attention to other keyboard shortcuts too.

At some point someone in the mob made a surprising remark, asking if there was an undo functionality. The product use - adding and removing things, and having to recreate your test data after someone else had decided that deletion was a good thing to test - probably inspired it. And the group found the Edit menu, with undo/redo functionality that turned out to extend to the feature we were looking at on the other side of the application. I pointed out the excellence of that realization and of sharing the idea with the group - it opened new doors for the mob as to what features there could be, as they could start thinking of connections beyond user interface proximity.

Time was running low, and I stopped the navigator to take over with a piece of advice, pointing out the two buttons they had not noticed. As we started to add those buttons, someone in the mob suggested that perhaps the mind map should be restructured again, as it was hard to make sense of with this many features. They used the last two rotations on getting started with restructuring the mind map, introducing concepts they had learned through exploration, like naming the buttons' functions and creating a subcategory of actions for the buttons, leaving the buttons' tooltips as another dimension to the actions.

The map created turned out to be this: 

If they had continued, they would soon have discovered that bugs and questions belong under their modeled areas. They would have visually seen more areas of functionality to consider from the listed features, and it would have inspired them to find more. There are still many features I've thought of in previous sessions that did not get listed. But they also had one thing they could name that I had missed.

We ended the session with a little retrospective. I asked people to share their observations in a round-robin style, and we had small discussions about them, addressing e.g. mob programming effectiveness and whether anyone would do this in the industry. Many people pointed out how this mechanism enabled building on others' knowledge and getting further than alone. And a few people pointed out how much fun it was.

As an after-lecture discussion, I had a chat with the lecturer, who said he will connect this experience to many of the concepts he will be teaching in the upcoming classes. He also said that the mob was a very good teaching format, and I felt I have improved a lot, even over the last few months, in facilitating testing mobs and knowing when to step in to show a concept that will help the group learn more without stretching them too far at once. We talked about an idea of taking mob testing into published research articles - an idea I would want to support. They've done research on one individual testing for 10 hours vs. 5 people testing for 2 hours each, and having 5 people test for 2 hours together would bring a whole new perspective into that setting.

If you want to try teaching in this format, I would love to help you get started. And if you know of a team who would like to try this mob testing format with their own software, I'd like to collaborate to fine-tune my training approaches on this and do an experiment. 

Wednesday, November 25, 2015

Getting started with mob programming for User Interface Styles

While we've had some opportunities to try mob programming with my team, there's now an actual longer-term theme that a subgroup of my team will work on. It turns out that the subgroup consists of people who have not tried this way of working before AND that we need to do all of this in a remote setting. I'm looking forward to learning how it works for us.

We've selected a theme: styles and the UI. There are a few reasons for this selection:
  • There's a new user interface designer in our team who has not worked on styles in major projects before. So we know there's a significant skills ramp-up needed, and I don't know a better way to do that than hands-on with a pair/group.
  • The current user interface programmer has not been strong on discipline. When sensing schedule urgency, it's just easy to take a shortcut. And those shortcuts pile up. The style code looks messy, and it's my personal testing nightmare: fix one thing, and many surprising places break. Cleanliness of the style code would help us so much, whereas testing styles to the extent the mess has been creeping is just failure demand. So, I will test while bringing my idea of discipline to the group.
  • No programmer other than the user interface programmer currently touches the style code. It is too complicated to understand. We want to also transfer back some knowledge of how everyone in the team could work with it and clean it up as they go, and inviting them into the mob seems to be a good starting point for that encouragement.
  • I want to try mobbing over long term and get more experiences in being the non-programmer (turning into a programmer) in a mob. And the others don't resist (at least yet), since the learning / sharing / improving the maintainability just makes sense.
Our styles, back when they were plain CSS, used to be as clean as the rest of the code. Then we hired a user interface programmer who brought in LESS, changed all the styles alone so that no others were involved, and alienated everyone else by accident. It's not hard to see that when touching something this wide and complicated, people follow the instinct to flee (and leave the work for the specialized individual), even if that negatively affects our flow. I'm just glad testing isn't in this position in my team - it could be. Lesson: work together and share the work so you don't end up there.

We had a "show-and-tell" session for the style code newbies of the team last week. We looked at our logic of structuring styles, and I had two thoughts in my head: 1) now would be a great time to get that test-driven CSS lesson Llewellyn Falco could do with Agile Finland, and 2) reading code is separate from writing code - I know a mess when I see one, while avoiding creating that mess myself would take a more focused learning effort. I made notes of four things to pay attention to:
  • Use variables instead of magic numbers. We really want to keep related things together, to better follow what goes on.
  • Simplify the overriding chain. Minimal design over everything we could do. Less is better. And chaining makes a mess of unpredictability.
  • Scope with views. We have mechanisms to scope the styles, but we mostly keep things global. No wonder I see side effects of changes.
  • No !important keyword unless we absolutely have to. Its use in chains has escalated to almost everywhere, to make sure the last rule in the chain ends up being applied. Discipline.
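To make the four notes above concrete, here is a minimal sketch of what they could look like in LESS. The selector and variable names are hypothetical, not from our actual codebase:

```less
// Variables instead of magic numbers, defined once in a shared place.
@row-height: 32px;
@accent-color: #2a7ae2;

// Scope with views: rules live under the view's root class
// instead of being global, so changes don't leak elsewhere.
.report-view {
  .grid-row {
    height: @row-height;     // no magic 32px scattered around
    color: @accent-color;
    // No !important: specificity comes from the view scope,
    // keeping the overriding chain short and predictable.
  }
}
```

The point is not the particular values, but that each rule has one obvious home and one obvious reason to win.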
Today we had our first scheduled mobbing session. It did not go quite as we'd hoped. We started with a 2-hour timebox and two goals in mind. We would use the new user interface designer's computer, and the first goal was to set up everything she needed to do the work. The second goal was styling a tiny feature with minimalistic styles that would still fit both the designer's idea of a beautiful UI and the team's idea of maintainability: only valuable stuff added.

We only got work done on the first goal. We got the basics going quickly, sharing voice over Skype and control of the computer, rotating on four-minute turns with the idea that our "expert" would skip his driving turn if we were in the middle of something where we'd just be stuck without him navigating. With Visual Studio, we soon learned the computer in question had not been set up for the solution we needed to work on. With a few clicks, that was done. So we went to find the branch with the new feature to be styled and ran into our first problem. The branch was not visible on the computer in question. We tried connections, no luck. We installed git bash and gui, thinking we could perhaps bring the branches in from the command line. The first-time installation gave us a git bash that kept crashing every time we'd start it, and after rebooting (no effect) we reinstalled. But looking at things from the command line did not take us much further. We pinged in more senior developers, still no clue. So we changed computers.

The starting setup on the second computer was faster, and we got to the point of Visual Studio and starting the application at about 40 minutes into our session. We started feeling the frustration of sharing the remote computer, as the sluggish feedback of mouse movement made simple clicks harder than we're used to. We stopped the running application, synced the branch and went to run the application, only to see it crash at start. With a theory about security settings, we turned off the sharing - no effect. So we decided to reboot the second computer too.

While waiting for the reboot (it takes quite a while on Windows...), I asked my local colleague whose computer we had tried using first if we could try one more thing - something I had suggested quite early into the problem, something that should not have been connected and was dismissed by the expert when I suggested it. We pulled the latest of our integration branch, and all of a sudden, the information about existing branches was available too. So by the time of the reboot and reconnect, we had a working local environment.
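In hindsight the mechanism is mundane: a branch someone else pushes after your last fetch simply doesn't exist among your local remote-tracking refs until you fetch (or pull, which fetches first) again. A self-contained sketch, with hypothetical repository paths and branch names:

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"

# Our clone, made before the feature branch existed:
git clone -q "$tmp/origin.git" "$tmp/work"
git -C "$tmp/work" -c user.name=a -c user.email=a@example.com \
    commit -q --allow-empty -m "initial"
git -C "$tmp/work" push -q origin HEAD

# A colleague publishes a feature branch after our clone:
git clone -q "$tmp/origin.git" "$tmp/other"
git -C "$tmp/other" checkout -q -b feature-styles
git -C "$tmp/other" push -q origin feature-styles

git -C "$tmp/work" branch -r   # feature-styles not listed yet
git -C "$tmp/work" fetch -q origin
git -C "$tmp/work" branch -r   # now origin/feature-styles appears
```

So the "unlikely" fix of pulling the integration branch worked not because the branches were related, but because the pull fetched everything the remote knew about.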

Just before our remote expert could rejoin, we tried running the application, and got the same crash. We tried changing solution configurations between Development and Debug, and got a little further: the login screen. But the same error at login. The group was getting very amused by the luck we were having, and concluded that at least it was much nicer to be in this together than alone.

We called in yet another senior, to confirm our theory: the new feature relied on database changes that were available only in the developer's personal database. At 1,5 hours, we decided to call this session done and leave the second goal for next session, by which time we'd get the database changes in a common environment too.

A few observations of this session:
  • The first goal turned out to be more complicated than we expected. No one, be it "code newbie", "style expert" or "senior developer / architect" had an immediate answer. Trying and googling were key. 
  • If we had tried the unlikely idea when it was presented instead of dismissing it, we would have saved 20 minutes. As a "code newbie", sometimes getting heard can be difficult.
  • Getting stuck and unstuck together seemed more efficient. Alone we would have just left it waiting for the "senior developer / architect" to hunt down an answer he did not have when we invited him - research needed. Together, we worked through the first problem and even managed to solve it. Surely, we could have done something else in the meanwhile. Like read the news.
  • Minimizing the remoteness, so that only the one who must be remote is, while the rest rotate locally, seems a good idea given the connection sluggishness.
  • We could improve from here: a single test that fails continuous delivery when the application doesn't start would be great for things like this.
  • Nothing I would pinpoint to my great tester abilities today. Perhaps next time. Next time is next week Monday. So experiment continues and everyone is still with me. 


Tuesday, November 24, 2015

Discomfort of defects

Last week, I was testing yet another new feature. And testing it was very straightforward. First I tested to learn what the feature could do with simple data. Then I extended the data, and extended the variation I could bring in with the new feature. I made a few checklists to cover the relevant scenarios. It was about a day of work, nothing major.

I found 10 issues. That too is a very typical number. But what was atypical was that this time, probably because of business priority of the feature, all the bugs got fixed within one day.

As I was going through the fixes based on code commits and Jira notes, I noticed an interesting phenomenon. Only 1 out of the 10 issues I had raised had a comment. And the only comment was to say that I had found something that was _relevant_  - a mistake in how the code had been written.

There were 10 changes, so I would argue there were 10 items. But the developer, working towards no faults in his code, dismissed the other 9 and focused on the 1 as his personal feedback.

This again reminded me of how differently we think. For me, all 10 were relevant for the end user, and I really did not care whose fault it was that they existed. The developer worked from the idea that, just in case he would ever need to defend himself, 9 of them were "new information he did not have when implementing the feature" and he "had only one code defect".

We haven't classified issues as anything but unfinished work for over a year, and still the culture of looking for one's own responsibility remains. There was no complaint about the other bugs, but the feeling can be seen in the comments - what gets acknowledged as "my bug" and what is just work on the list.

Saturday, November 21, 2015

Encouraging self-organization

I was on a train, going back to work from lunch, when my phone beeped. I looked at the screen to realize someone was pinging me in Flowdock. That's how my team, working remotely, communicates. I curiously checked what was going on.

One of my developers was asking if I'm around, and I told him where I was. Having a discussion on mobile isn't really a problem, so I asked "How can I help you?".

He mentioned the feature he was working on, one that I knew just as well as he did. But what he said next surprised me. He said he had been thinking about the purpose of the feature and why users want it, and come to the conclusion that, as he understands the purpose, it will be severely damaged if the end user isn't able to select multiple targets for editing in combination with what he is implementing.

This should be a normal discussion to have, but it really isn't. I can't recall that particular developer ever before reaching out to me for a discussion without me initiating it, and never saying much about caring for what the user can do. It had always been me.

Alongside the discussion of the purpose, he introduced a change he had already implemented to the multiselect feature. Having sat more with the product owner, I knew the word on priorities was that it could not be implemented, because it would slow down the functionalities around it. His questions were more about how to make it right, not about whether making it in the first place was right. I knew it was outside our priorities under a strict interpretation. But it is definitely to the benefit of the users.

We had a longer chat about other purposes, limitations and risks of the things he was thinking of. The reason he was discussing it with me was that there was yet another connected and very complicated feature, one he had forgotten many times in the past, with me pointing out how it had been broken as a side effect. And now he wanted to work it through with me.

With the little slowness that the communication channel created, I had time to reflect what was going on in my head with the discussion. I had two conflicting views:

  1. I want to recognize the progress of initiating contact and showing caring for user needs and encourage that behavior with positive feedback
  2. I want to scold him for implementing a new major feature without discussing priorities with others before implementing. 
I'm happy to notice that I don't follow my second instinct of scolding for breaking the rules. I focus on the positive. The time is used already. We want to go forward and create value. He is creating value. So I dig into the details:

  • I'm happy that he is implementing a feature that users have expressed need for, that we have prioritized out with reasons of performance
  • I'm concerned he did not pay attention to performance in coming up with his solution, and help him work that through. 
  • I want to be helpful in making the feature work since it is apparently coming in soon, even if I did not know of it. 
We add more aspects to the feature, making it even better. By talking to me about a specific problem he was having and felt the need to discuss, he also tells me he found a solution. Without me ever saying much about my concerns, he remembers to close with the note: he only spent a day on this feature. 

I'm happy he did. And I wish we would never make our developers feel they can't take the initiative. Rules are meant to be bent. And that's a hard one for me, as a Finn (we love rules) and as a tester (yep, the gatekeeper still lives deep in me). 

Thursday, November 19, 2015

My approach to being a Speak Easy Mentor

My first Speak Easy mentoring process is now over. Ru Cindrea, my mentee and my friend, has delivered her first international talk at Better Software Conference in Florida. I could not be more happy for her. She did great. She is great.

If you don't know what Speak Easy is, go look it up. It helps speakers get started with speaking through mentoring. The punch line is diversity. For me it means women. It means people who wouldn't talk otherwise regardless of gender. It means new stories and new opportunities to learn through other people's experiences. It means growing new experts locally, and sharing them with the world so that there will also be strong European voices. Mentors are people from the community, who volunteer, each for their own reasons. It's all free and voluntary.

I'm now through one mentoring process, and half-way in another, and my approach to this is starting to shape. I'm very curious how other Speak Easy mentors do this, so I thought to share my way.

It's really a process

I seem to take my mentee through steps that turn into six Skype sessions of varying length. From my very first assigned mentee (who got lost somewhere), I learned to agree on the next step while completing the current one, and to rely on Skype meetings rather than emails.

Step 1: Introductions

In the first meeting, I tell my mentee who I am, find out who my mentee is and talk about the process my way. I commit to being available and set Skype face-to-face as the mechanism. We talk about the minimal deliverables - like how you don't need your slides early on, but they DO help clear up your abstract a lot if you work on them early.

We talk about what type of talks my mentee would find relevant. I share my bias towards personal experiences over great ideas / theory, to probe for compatibility.

We talk about conferences that are out there, and their special characteristics. I'm usually aware of submission schedules, so I can share those. We look at testing conferences, and software development conferences if those seem to fit my mentee's interests. I point my mentee to the Speak Easy site, but also emphasize that it lists just the conferences that reserve a special slot for Speak Easy applicants. You can also apply directly, alongside everyone else.

While I have an idea of steps forward, I introduce only the next step. I leave my mentee thinking (mind mapping) about what experiences would be worth sharing, and what lessons those would deliver.

With Ru, this step was a funny ad hoc mixture. We were in the same online space at the right time, when it was time to submit for the Speak Easy quota. We realized that while Ru has spoken locally, she hasn't internationally. So we fast-forwarded to a specific conference by chance.

Step 2: Finding your talk and places to submit

In the second meeting, we go through my mentee's ideas of what to talk about from experiences. I share my insights on what in those ideas excite me, which I like best and think would fill real needs conferences have - all from my perspective. I emphasize that if something I'm not excited about would be something my mentee is excited about, that is still the one we should go forward with. Speaker first. The conferences will make their calls on choices and my view might be very different than theirs. And there's the right place for every talk idea in the world. Some belong in international testing and software conferences. Others belong in local meetups. And they might grow up to belong to bigger arenas too.

Learning to speak locally is a great thing to do. I emphasize local speaking as an opportunity to practice. Fail (and learn) in a small-scale, safe environment. And you won't fail anyway, just stumble a bit occasionally.

With Ru, finding her talk was a step that happened before I became her mentor. We were traveling and talking about inspiring lessons in projects, and I was trying to understand her experience with signal detection theory. We shared amazing stories about the worst bug she has had to deal with, the intimidating "reproduce or get fired" scenarios that sound like a bad movie, and how many troubles with dealing with bug reports only made sense after taking the BBST Bug Advocacy course. I knew she needed to deliver this somewhere. So the idea was there when the deadline to submit was approaching.

With my second mentee, I was very proud of her sharing five selected topics and her experiences in those. We prioritized them together. As the next step, homework was to work on the title and description.

Step 3: Creating the abstract

In the third meeting, we work on the abstract. I had the idea that I would review the abstract and give feedback. But more often it is about finding the essence and motivation of the talk so that we can get that on paper together.

To find the essence, it's either about mind mapping or discussing around a slide outline, without yet focusing on slide specifics.

I'm learning to pair better on the abstracts. My second mentee made me particularly happy by asking to create the abstract together in the session, instead of resorting to the write-review cycle. Perhaps she is a particularly good agile tester, and that shows in her collaboration skills.

I still have a bad habit of joining the writing, writing out options and contributing my ideas that way. There's still too much magic in the thinking that happens when my fingers touch the keyboard. The magic sometimes vanishes when I pair. Pairing is a skill.

The ideal case is that the third session happens before the submission process. But it could also be that the third session is, like with my second mentee, after being accepted to deliver the talk in an international conference, with the expectation that the abstract needs to be improved.

Step 4: Creating the slides

In the fourth meeting, we usually work on the slides. There's a lot of homework for the mentee before this. Most often the skeleton of the slides has started to form clearly by this time. But how the message is structured might still be very raw. Typically we talk about making the talk more lively with stories. We talk about splitting the messages so that they can be digested better by the listeners. We talk about the storyline, the contents of each slide and the number of slides in relation to the intended style of delivery.

With Ru, she just pinged me when she had draft slides available and I read them, knowing her story already and filling in the blanks. I left comments, and we talked a bit over Skype. Later she pinged me again, having changed the slides completely, independent of my comments, and again we discussed feedback.

Step 5: Deliver the talk to me

In the fifth meeting, I volunteer to listen to the talk. While I have been a Speak Easy mentor for only a while, I've offered similar services to people before. They deliver the talk, I listen and give feedback. Real audiences tend to be polite and avoid negative feedback; I speak about that too and work through ways of improving.

We talk about feelings, perceptions and take-aways. It's safe. It's just me. And we both know it's not ready; it's not what you're measured against. It's just practice, it's for improving.

With Ru, she delivered the talk to me and her colleagues at Altom simultaneously. We had great reflective feedback on what we liked and what could still be improved. None of it was absolute. Feelings, observations, ideas. Use what you find useful. Discuss, reflect and find your way.

With other people, I've sometimes had several deliveries. Seeing the talk grow through feedback has been very rewarding. It was always there. It just needed to be revealed. And I could help with that.

I advise people to practice with others too. I push my mentees to local user groups. I had one set up in Helsinki for Ru, but schedules turned out not to match, so other people (including myself) used the session to practice international talks. With these sessions, there's usually more time for discussion, and you can actively invite people to give you feedback. Or you can just look at how engaged you feel they are during your session. Or both.

And then there's the ultimate practice: delivering the talk. I wish I could be there. But I might just be around twitter, following what people say and bring out as their lessons. I admire Huib Schoots for showing up to see his mentees perform. He had a few of them at TinyTestBash. That's presence and support.

Step 6: Post-conference mentoring feedback

When it's all done, I want to still close with a bit of reflection. How did we do and what did we learn? Mentoring is always a learning experience for me too.

With Ru, I learned that some conferences focus on word count and style - and that there are guidelines that no one mentions when submitting, but when you miss them, the feedback can be harsh and hits both of us.

With my second mentee, I've already learned that being available around submission deadlines while volunteering does not always work out. And when the "improve" email comes, it hits both of us. I hope it softens the blow for my mentee. And I've learned we should really focus on doing, not talking - pairing is great.

Other reflections

Now that I've written out my process, I also notice another difference. I find my mentees on twitter and register with them for Speak Easy. Others seem to find their mentees through Speak Easy, which has a matchmaking service. I enjoy working with people I've met and believe will deliver great sessions, as we've had a chance to chat before. So I tend not to let Speak Easy know of my availability; it's always subject to inspiration. If you want me to mentor you, good advice is to ask me directly. I will if my bandwidth allows.

In this picture, I prefer the outer route. With end points, I've now had one of both. One using the Speak Easy Quota, one using regular conference submissions. 

Speak Easy is great. Thank you Fiona Charles and Anne-Marie Charrett for setting it up! 

Obsessing on facts

I've learned to feel strongly about facts. Sometimes I even obsess about facts. What is true and what is not? When does the relative rule apply - true to someone - and when does that difference matter?

Facts are important enough to me that I've driven friends crazy discussing for nights in a row whether the stories we tell in conference talks need to be as factual as the storyteller can make them, or whether the storyteller can change details as long as they stay true to the story. That is still a mystery to me. I'm getting closer to accepting that our memories will tarnish the facts anyway. So it ends up as a set of questions of ethics that are not so straightforward. The relative rule applies again.

Facts, and perspectives on facts, are something I work with as a tester. I'm a big fan of Laurent Bossavit for the leprechaun hunting he does on software engineering, dispelling common myths. I would love to learn not to distribute myths or create new ones.

Some days ago I saw a retweet in my tweet stream:

This is so fucking important. It should be retweeted and shared 10000 times
I needed to click the picture open and read the text. I had plenty of opportunity to stop and think before retweeting. And I did. I realized it could be an urban legend. But I decided to retweet anyway, because between the lines the message I wanted to retweet was not the facts written down, but the underlying story - the unfair boxing of people with labels.

Soon after, I got a response from a fellow tester I appreciated: 
I came back to think about this because, days after the fact, people keep retweeting my retweet, and twitter is kind enough to tell me about it. Where is my responsibility, given the added information I have now over the time of sharing it?

This is how leprechauns are born. So I'm thinking: which leprechauns matter? How did I end up obsessing over facts in software project stories on stage when I cared so little about facts here?

Perhaps obsessing over facts is a learned trait, and I just need to work on my heuristics on when that might be appropriate. 

Wednesday, November 18, 2015

Technically Speaking and Public Speaking Goals 2016

The Technically Speaking newsletter included a challenge: share your public speaking goals for 2016. Looking at examples like this, I decided to do mine following the format.

Technically Speaking is one year old. I think that is about the same age as Speak Easy, my favorite diversity program. I love both initiatives. Technically Speaking seems to grow newsletter first (useful stuff!), and Speak Easy seems to grow coaching first (amazing new speakers created within such a short timeframe!).

My Public Speaking Goals for 2016

  • Give a high-profile keynote
  • Publish 2 talks as videos online that wouldn't happen in conferences 
  • Fewer talks at conferences, just 3 (2 already scheduled)
  • Some workshops at conferences, as paid work (at least 1)
  • Coaching 2 new speakers - finish one in progress and start one new

I want to transform my focus to being a paid speaker - a professional speaker that does invited keynotes and paid workshops. I realize I will not be able to resist temptation to be somewhere just for the fun of it though.

Some of my 2015 Public Speaking Highlights

This was my "year of international conferences". Adding all that I did this year to Lanyrd, I learned this:

And that is not all; that is just the conferences. The list does include some events from the past, though, so 2015 itself had only 13 events in 7 countries. 

Some numbers collected inspired by the example: 
  • 33 delivered sessions 
  • 2 keynotes and 2 opening presentations on a single-track conference! 
  • 13 different conferences in 7 different countries, 2 continents
  • Breakdown for conferences: 7 CfP / 6 invited; 2 conferences were #PayToSpeak (did not cover travel+stay and were not local)
  • 6 talks in meetups
  • 1 peer conference with a talk, and 3 open space conferences with a session (not counted in the list)
  • 1 webinar
  • 6 training courses (public & private) 
  • Coached 1 new speaker through SpeakEasy to deliver her first talk and she did great! 
I also organized quite many sessions and let others speak. My organizing theme of the year was #TechExcellence - Agile Technical Practices, in addition to the usual #Testing -community organizing I do. 

And in case anyone was wondering, I did all of this on the side of my job. My employer lets me be out of the office for trainings 10 days a year. So my typical conference experience is a double shift to get the work done and weekends to compensate for lost hours. Not a recommended practice. 

My lesson learned: doing this long enough creates some level of fluency. It's time to take it to the next level and learn to "Talk like TED". 

Sunday, November 8, 2015

Teaching in a mob format

Our #TestBashNY workshop was Collaborative Exploratory and Unit Testing and we had a great time with our mob of 20+ participants. I feel that every time I teach in this format, I learn more and wanted to share a few of the insights that run through my mind.

I *see* you

For years, I've been running similar courses with individual and paired exercises. Comparing the experience of those to the experience of running similar content as mobs (one computer, one driver, group navigation through a designated navigator in strong-style), my chances of helping the participants advance their skills are amplified.

In pairs, you learn from your pair. I, as the instructor, can't see you. I can see your results if I ask for a debrief. But I can't see you test. In a mob, I see the clarity of your thinking and your insightful approaches. I see the understanding of your purpose. I see if you have the structure and the focus. And I can jump in to give you experiences you wouldn't get without this. 

Seeing more people test is giving me new motivation to find even better ways of teaching what I've learned through practice. It has brought me to ideas of teaching different level groups differently, leveling knowledge over time. 


When there's a new concept to introduce, I can take over the navigation. I'm still playing with the ideas of when this is appropriate. 

On the workshop day, Llewellyn took over navigation for finding the right place in the code where our insights from exploration were guiding us to add unit tests, and for creating an appropriate seam to get into it. The participants took turns driving, and supported his navigation from the mob. This was a natural selection, since so many of the participants did not identify as programmers. 

In our previous version of the workshop, I took over navigation while exploring by joining the mob with constraints on what I was allowed to do: clean up the mind map and document things others had seen but missed, or what was still hidden, waiting to be revealed to a tester's eye. Showing the example to a group of non-testers changed how the others in the mob did the work. Examples of good work are powerful, and going through someone else's hands, the pace is more appropriate than in a demo. 

Handling surprises 

Since in this workshop we turn insights from exploration into unit tests, there's a relevant element of randomness in what insights the group ends up producing. We ended up looking at automatic naming of elements in a list this time, and it was code we had not worked on before. Thus we got to search the codebase for the right concepts, try out debugging to see if we were indeed where we wanted to be, and go through a series of refactorings to separate the functional logic from the mess around it to get it under tests. I find the experience quite powerful, and the lesson relevant: after the half hour of digging the functionality out, adding more tests was easy. The first test takes the most work, and enables building variation on top of it. 
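To make the lesson concrete: once logic like automatic naming is dug out of its surroundings into a pure function, the first test is the hard-won one and variations come cheap. This is a hedged sketch only; nextName and its "Item N" scheme are my illustration, not the workshop's actual codebase.

```javascript
// Pure function extracted behind a seam: automatic naming of
// elements added to a list. Finds the smallest unused "base N" name.
function nextName(existingNames, base = 'Item') {
  let n = 1;
  while (existingNames.includes(`${base} ${n}`)) n += 1;
  return `${base} ${n}`;
}
```

With the function isolated, each new unit test is a one-liner: empty list, dense list, list with gaps, and so on.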

Teaching more

This style of teaching made me want to teach more. It made me think how well this would adjust to company-specific trainings where we could explore your own application, finding problems, getting them under unit tests and fixed. If you might want to experiment on these with me and Llewellyn, please get in touch.  

Sunday, November 1, 2015

Mob Programming the Halloween Game Hackathon Experience

I spent my weekend at an all-women Halloween Game Hackathon. For 2,5 days, close to 30 women got together for a coding event, forming groups with friends and strangers to build Halloween-themed games. The results are available on the event site.

This post is about my experience of all this. It's an experience colored by the fact that for 2,5 days, I was a developer.

When I created my name tag, this is what I said to talk to me about:

I actively avoided mentioning how little I enjoy programming in my every day work. I avoided talking lovingly about testing. I just did not actively volunteer info about what I do at work. I did not want to volunteer my labels. I wanted to be a developer amongst other developers. And that I did.

The organizers had placed us in teams based on our applications. Each team had a mix of experience in programming. I did not know any of the three ladies I was assigned to work with, and was looking forward to it. Like I had written on my little name tag pumpkin, I was eager to work in mob format to really create together. It wasn't that easy though. The one of us who presented herself as the most experienced developer was strong in her opinions about what the game would be about (violence) and that she wouldn't want to work in a mob format. After deciding the basic directions on Friday evening, I was already preparing to just pay 30 euros (the no-show fee) and leave - I would not spend my weekend bossed around or left on the sidelines when there were such opportunities to learn from each other. Money was not the relevant investment - this cost me two nights away from my kids, as the days were long and they couldn't come home for the nights at all.

The never-before-programmer on my team encouraged me not to give up yet, but to just go with it on Saturday morning. On mobbing, I got back to a piece of advice I was given before: mob with the part of your team that wants to do that. With that advice, the three of us who were on site early on Saturday went with mob programming, regardless of the initial hesitation. I talked about how much more important it was to me that our never-before-programmer would feel she is an equal contributor in writing the code, than getting as much of it done as we could if learning for all of us wasn't the main goal. And the two ladies went with my request on a "let's try it then" promise.

With 4-minute rotations on who was at the keyboard, we quickly picked up the rhythm of just continuing where the previous driver left off. We used a javascript game engine that none of us had used before, and the mob format was particularly powerful for agreeing on what we wanted to do and learning how that could be done. We solved our problems together, and the three of us owned every line of the code together.

There were moments that I was particularly delighted about as I find them insightful experiences:

  • We needed to find a way to do our game timer. We were working on an idea needing quite many lines, when our never-before-programmer said as a joke: "Why can't there just be getTimer that gives us that?" The others picked up on this: "Let's create a function for that" - and the unclear many lines transformed into a purposeful expression of our intent, doing just what we needed.
  • We ran into a particularly tricky problem with the game engine, and I was ready to give up and move to another feature. The persistence of the experienced developer in the mob kept us all on task, bringing in ideas on how to approach this. And by bringing in yet another developer, we got away from our blocking problem with an actual solution - not the code we were missing, but the idea of how to work with the limitations of the engine. 
  • The fourth group member came in later, and suggested breaking down the mob and getting everyone developing on their own. By this time, the trio said no, because we had experienced how much more powerful we were together. 
  • By Sunday morning, our never-before-programmer was very skillfully navigating for implementing new (similar) features while I was driving and our third developer was taking a break.  
  • Throughout the weekend, our software was release-ready feature by feature, and we did really well at finding the next step that made sense to implement. We were about to go into planning the features, but managed with just outlining things in a mind map, finding the more pressing features while getting some of the game done. 
  • The fourth group member (the most senior developer) ended up contributing just graphics and a bug fix. Understanding the game engine code logic without the shared experience the mob had was difficult in the limited time. The individual contributor also used a lot of our time with us waiting for her to resolve merge conflicts and asking us not to check in while she got her stuff into git. We would have gotten more done just as a mob. And her expertise could have been useful in solving the biggest and trickiest problems we needed to solve, had she volunteered to join us over doing her own things by herself. 
  • We failed at communicating between the mob and the individual contributor. The graphics brought in were not in sync with the idea we had had about what the game was, and the rest of us were left out of deciding what they would be about. But we adapted - creating less value than we could have if the communication had worked.
  • I made it through the weekend without ever mentioning how little I code, and was really contributing equally to the solutions. For the first time, I was an active navigator and not just filling in an insight here and there. That is because every line of code was created together. 
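The getTimer moment is worth a sketch: wrapping the engine's raw clock arithmetic in a function named after our intent. This is purely illustrative - makeGameTimer, the injected clock and the 60-second limit are my stand-ins, not the game's actual code.

```javascript
// "Why can't there just be getTimer" - hide the clock arithmetic
// behind a function that says what we mean. The clock is injected
// (a function returning milliseconds) so the timer is easy to test.
function makeGameTimer(limitSeconds, now) {
  const startTime = now();
  return {
    // seconds left before the game ends, never below zero
    getTimer: () => Math.max(0, limitSeconds - (now() - startTime) / 1000),
  };
}
```

The payoff of the joke-turned-function is that every caller reads `getTimer()` instead of repeating the subtraction, and the clamping to zero lives in exactly one place.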
It was a fun experience that taught me a lot again. It gave me yet another perspective on how great mob programming is and how well it actually addresses the problems of team formation without specifically targeting them. 

The biggest takeaway for me was very personal: I learned I know a lot more about programming than I give myself credit for. 

Friday, October 30, 2015

Confidence plays a role in testing

It does not take a lot of confidence to say that a typo fix in code (or in resource files amongst all the code) is on the smaller-risk side of changes. Doing those changes rarely, I tend to move slowly and review what changed, both when changing and when checking in. I build the system and check the string in its use. Sometimes I think that's what I could expect developers to do too, and most of the time they do.

But sometimes, when the change feels small and insignificant, we grow overconfident. There's nothing that could break here, surely? Just a typo fix!

I know I have accidentally cut out an ending quotation mark. I have introduced longer strings that don't fit on the screen. I have fixed a text resource from the point of view of one location, where it's used in many places and another has issues. I have wanted to fix a "typo" of missing dots on top of an a, only to learn that it's not a typo, but that Scandinavian alphabets can't be used there. Also, what is a typo to me might not be a typo for someone else, like US vs. UK English.
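Some of those slips are even mechanically checkable. As a hedged sketch (the 40-character limit and the resource shape are invented for illustration, not any real project's rules), a tiny check over a text resource could flag the unbalanced-quote and too-long cases:

```javascript
// Illustrative check for two of the typo-fix failure modes above:
// an accidentally cut quotation mark, and a string too long to fit.
function checkResource(value, maxLen = 40) {
  const problems = [];
  // an odd count of double quotes suggests one got cut out
  const quotes = (value.match(/"/g) || []).length;
  if (quotes % 2 !== 0) problems.push('unbalanced quotation marks');
  // a length budget stands in for "fits on the screen"
  if (value.length > maxLen) problems.push('may not fit on screen');
  return problems;
}
```

It would not catch the location-specific or US-vs-UK cases, of course; those still need a human looking at the string in use.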

So surely, I am aware that nothing is safe. But I would not agree that I cannot do the related checks and have the discussions myself. And if I worked on a high-risk application (which I don't), I would probably still involve someone else in my analysis. I don't look for a one-size-fits-all solution; I'm against one.

But in my context, it just makes little sense to have everything checked by another person. It somehow boils down to confidence, which reminded me of an insight a friend shared from Lean Kanban Nordic a while ago:
I'm not very confident when I go fix typos in code. I spend so little time with code. I know what I'm doing, sort of, but I'm also very aware of things that could go wrong. My lack of confidence makes me double-check. But it does not make me include another person just on principle. Then again, I also know that I don't have to ask anyone to look at the code after my change; version control alerts the developers to my changes, and almost certainly someone is looking for a chance to point out if I could learn something from my mistakes.

The discussion also led to this question:

Here's where I am confident. I don't think my existence in (this or any other) team is based on the lack of ability to check their own code. I've seen how brilliantly my team's developers test when I sit with them, silently, without making them do better testing with my advice. My presence, my existence and my aura of higher requirements seem to be enough. It's not (just) about skills, it's about habits and differently developed interest profiles.

I regularly let developers test their own code (as if it were a matter of letting; they always test their own code, there are 10 of them and just one of me). Sometimes, I specifically speak out loud about putting their code into production without me testing it, just to assess the risk and remind us of our agreement: my effort is something we add on top of all the testing they've already done; it's exploring for new information. And it's best if we can do that together, so that the knowledge of how it is done sticks around when I'm not around.

They don't need me. They benefit from having me.

  • They get faster feedback on complicated things end users might (or might not) report back to them.
  • They avoid building some things that aren't going to be valuable, because I get heard when I speak the business and end-user languages of needs, concepts and value.
  • They get to rely on a close colleague for asking questions or pondering choices while implementing, as they learn more of what is possible. That colleague is available more often than the end users and business people, and knows the product by learning more about it, hands on and through the various channels developers don't find as fascinating.
  • They get regular positive feedback when they excel in creating good things. I see what they really do and compliment the actions they take. They don't have to be perfect now, but they get recognized when they improve.
  • They get encouragement to practice, to get better, to share things with me and with each other.
  • They get praise for building great software, knowing they would not have built it as well without the deep feedback I helped them gather.

They don't need me. But they tell me that I'm a catalyst that makes us all better. That I voice hard things they wish they knew how to say. And that together we're just better. I'm just different. I bring other viewpoints to the table. And I'm confident enough to no longer measure my value in bugs in Jira or in the lines-of-code changes I check.

Too much or too little confidence are both warning signs. With a bit of ping pong in between and a dose of healthy criticism, there's a great opportunity to learn. Sometimes you succeed, sometimes you fail, but you always learn. 

Thursday, October 29, 2015

You don't need a different person to test what you did

If it is unclear to anyone reading my blog: I'm very much a tester. Nothing I do outside the domain of what testers usually do changes the fact that I love testing, I care for testing and I want to learn about testing. There might be days when I'm the product manager. There might be days when I'm the programmer. And the more people tell me there are things testers don't do, the more I go and break down that wall, which is just imaginary.

On the other hand, I've been a tester for 20 years. I work long, intensive days learning my craft. I pay attention to how I test. I pay attention to where my ideas come from, collecting perspectives and ways of getting into the perspectives that everyone else in my team misses. And I'm getting pretty good at that, and yet there is so much more to learn about how to test in a world of unknown unknowns, too much information and a myriad of connections. It's the best thing ever. And it's hard work.

It's hard work that hasn't left me time to do manual regression testing, because I keep coming up with new perspectives (and I think manual regression testing can also be done by developers, increasing the likelihood of it turning into automation). It's hard work that hasn't left me enough time to become as excellent at programming as I could be if I had chosen differently. But testing is my superpower; it helps my team do awesome stuff together. It helps our product manager come and thank my team for delivering consistently working software, and it lets me be proud of my developers, who invite my contribution (even if still way too slowly) and actively act on the feedback they get from me.

You become great at testing by choosing your focus the way a tester chooses, and by practicing testing. Programming is fun, but it's different. It creates different thought patterns than the focus on bugs, value, business and systems does. To build great software, we need both thought patterns, preferably in close collaboration.

I was again tweeting today about the stupidity of my own thoughts in hindsight, realizing how much effort I've wasted (without creating any additional value) by logging user-message typos into the bug database, instead of skipping the logging and just going in and fixing them. I talked my way through the layers of resistance to get access to the source code only fairly recently, and I still cause regular stress to developers who see I've changed a file they consider their own to fix typos in strings. But that takes us forward. It saves time for us all that I approach different problems differently. Shortening a printed string by two letters has implications, but just as developers are able to check their changes, I check mine. The tester in me is strong; it has no problem overpowering the programmer in me.

But when I tweet about things like this, my colleagues in testing remind me of how much my mind has changed. This is a great example:
I too used to think the relevant bit was having two people, a programmer and a tester. I used to think there were things a tester should do (and nothing would stop a programmer from taking the role of a tester, except skills): the check-your-work part.

What I've learned, looking at things in more detail and with fewer abstractions, is that the changes we make to code are not all the same. To non-programming testers (one of which I still consider myself to be most of the time), the changes that make software work can appear more magical than they are. And dispelling the mysticism that surrounds software is very much necessary. When I know from hundreds of samples of their impact on our system (rehearsing again and again, delivering continuously to production) that user string fixes are often safe, I don't need to go ask some different person to play the role of tester for me. I know I'm biased about my own work, but not everything needs to be treated the same. That's one reason why I love the idea of context-driven.

Find your rules. Learn about their weaknesses. Sometimes (more often than not), explore around them. But always, always be aware that in a whole team developing together, time used on one thing is time away from something else. A handoff from one person to another is more work than a handoff from the programmer in me to the tester in me. The choices we make are supposed to be intellectual ones. The best that bright people can make together.

You don't need a different person to test what you did. But someone should test the change. Just like I don't test every change from developers, they don't need to test every change from me. We can ask for help when help is needed.

Wednesday, October 28, 2015

A bug that taught that we're implementing too much

There's a bug we've been analyzing that emphasizes things in a relevant way.

At first sight, it appears to be just a resizing / layout issue. An extra white box that emerges when you resize your window.

At first, we let time fly by with a high-level discussion between two developers on what could cause it. One, responsible for styles, appeared convinced it would require functional changes to fix. The other, responsible for technologies, appeared convinced it would require style sheet changes. With this lack of clarity about the skills needed, nothing moved forward while the second person had no time to jump in.

Finally the two paired up and started investigating. I felt very proud watching them triangulate the cause by removing factors and simplifying the problem to understand it better. No rushed "let's try this" or "maybe this would fix it", but building an understanding before jumping to conclusions. We had already aired our share of conclusions while not investigating deeper, and speculation isn't the right thing here. They removed all the self-built styles from the component we're using, and the problem disappeared. So now we know the second developer was right.

The main insight, however, came from the discussion our team ended up with next. We realized there were hundreds of lines of style code (in Less) making very small changes to the (relatively nice) styles the commercial component ships with. And our aspiration to tweak every detail of the layout was causing us to spend, repeatedly, a significant amount of time testing, debugging and fixing problems. What if we redesigned our approach to styles from a maintainability viewpoint, how would that change things? What if we actively had less implementation, since many of the style tweaks are driven not by end-user value but by all the things we could do?

From a small fix, we're moving into the shared work of creating less software to maintain. I find this way of thinking insightful enough to note down and share. We're a small team; we can't afford not to react to maintenance burden. But it has been a long route of added collaboration and shared ownership to get to the point where these discussions and lessons naturally emerge, and are taken forward.

Is this testing? I might think so. It's all of my business to bring forth the cost of testing complicated structures to get them simplified.

Monday, October 26, 2015

Don't #PayToSpeak, join European Testing Conference as a speaker

I'm an idealist who works hard to change the world. European Testing Conference is one tool for me to change the world as we know it - the world of conferences. And the aspect I want to change is having to #PayToSpeak.

When you #PayToSpeak, many presentations have a sales-pitch slant to them. Not all. But some. There's some reason why someone other than the conference organizer pays for you to show up.

I go to quite a number of conferences. I review conference proposals for some. Some of them I have very high respect for, and would recommend to others without a blink, both as a participant and as a speaker. The two tend to be connected, though: good speakers create better content experiences for the participants.

The #TestBash series (TestBash, TestBashNY, TinyTestBash) is at the top of my list. The amazing Rosie Sherry works with integrity unlike any other, pouring her heart into making the events great both for participants and speakers, and has succeeded for years in a row. Rosie's conferences don't make the speakers pay for speaking: she covers travel and stay, and her actions show she realizes how much work speakers (like me) put into their presentations. It is only fair that we don't #PayToSpeak, even if we are not paid to speak.

The second category of conferences I vouch for are ones where the community is so strong that the contents turn out great, even though the speakers will pay for showing up. I'd like to recognize CAST and Nordic Testing Days in this category. I have paid to speak at CAST (and it costs a lot to travel there from Europe, and my employer does not pay for me!) and I could speak at NTD since it is so close by. I hear Copenhagen Context and Let's Test might be similar, but I have no personal interest towards them so far, for very different reasons.

The third category is conferences I have spoken at, but would no longer speak at unless invited (and paid for). These include EuroSTAR, STPCon and other typically commercial events. I get that their commercial success is partly based on volunteering speakers, but I also believe it gives them a very biased view of the world of testing: sales-oriented and new-speaker-oriented, with new speakers still seeking to get their first mentions of reputable conferences under their belt.

The fourth category is conferences where you pay an entrance fee to speak. These are usually framed as being from the community to the community. Sometimes they appear like commercial conferences (like XP201x), sometimes they are open space conferences. For open space conferences, I get the idea: everyone pays the fee, but it tends to be cheaper and applies to everyone.

European Testing Conference seeks to join the first category. We believe that great speakers with practical messages to share should not #PayToSpeak, quite the contrary. So we pay for the travel and stay. And when we are financially successful, we will also create a model of paying honorariums for the work. Creating a presentation is a lot of work. It's valuable. It's the second main reason people join conferences; the other is to meet the community. But the content we confer around is relevant.

Have you already let us know about your interest in speaking and the story you would have to share? Look at our call for co-creation. And if you are not a speaker, did you already get your ticket to learn from some of the greatest speakers we can find, on the premise of paying them instead of making them pay? We've published three of our four keynotes, and the ticket price goes up again when all of the speakers are announced, so get yours now.

A course on testing, pairing and mobbing

I had a great time delivering my Exploratory Testing Work Course in Brighton before TinyTestBash last week. My goal on that course is to teach people some critical self-management techniques that make a difference for better-quality exploratory testing: mainly keeping track of both the threads of details and the higher levels of planning and backtracking, in a combination that is right for you personally, in the frame you feel you are in today.

This is a course I've run many times, pairing people up for the five sessions of testing. This time, I did something different, based on what I've learned about getting everyone in the class to learn better: I had people pairing strong-style in the morning and testing in a Mob Programming format in the afternoon.

Strong-Style Pairing on the course

I've had people pair before. But this time, I was very specific on how I wanted people to pair. I asked one to be the driver, who has the keyboard but is supposed to keep listening to what the navigator says, making no decisions of their own without checking with the navigator. I asked the other to be the navigator, who is actually the tester, but with access to the keyboard only through the driver, no touching the keyboard. In strong style, all ideas from one head must go to the computer through someone else's hands.

There are essentially two things that go on the computer while testing:
  • using the program
  • making notes
I looked at the groups doing this, and in most groups the person not at the keyboard instinctively took the notes. The trouble with that is that while it may seem faster, it removes the feedback loop on whether you actually agree on what is being written, and creates distance in the pairing.

In the first session, I suggested that people could change roles as ideas came up. Most groups did not. So for the second session, to improve on this, I introduced a mandatory role change every four minutes, which I called out.

All the playing with pairing was not to teach the people on the course pairing, but to make sure they teach each other testing by really sharing the activity through strong-style pairing.

When I called for observations, my own observation was that people paid relatively more attention to the pairing: how different (better) this style was, and how they had had no idea there were different styles of pairing. We were on an exploratory testing course, though, not a pair testing course, so I was hoping pairing would give me a way to teach testing (have the pairs teach each other in pairs), but people did not vocalize much of that in their observations. So it was good I had something different planned for the afternoon.

Mob Exploratory Testing on the course

In pairs, I can introduce rules and hope people will follow them. I can try mixing up people so that the pairs end up diverse, but usually course logistics give some limitations here. But I can't see what goes on in detail in each pair, and teach them better testing. Mob format is different.

In the mob format, I can step in as navigator whenever I feel I need to show or teach something to the whole group, in the context of what we are trying to test right now. I can make sure we as a group stick to a given charter, and that we divert from it only intentionally. And everyone in the mob can contribute ideas to make this one task's output better.

For a course, it is a big mob, but since we had half a day, that was not a problem. I preferred having all 18 in the mob over the style I use in shorter conference sessions, where I choose a subgroup to do the mobbing and the others just observe. Everyone gets their time in each role more than once, and everyone can contribute to the hard tasks.

I handed out a Mindmup document as the place to take shared notes, and someone from the group asked if it would be better if someone else took the notes. This question is so common in mob testing that I need to learn to address it better. Shared notes are not the same thing as private notes, and everything created is supposed to go through two people, as mobbing uses strong-style pairing too.

When changing modality from pairs to mob, I also changed the application we were testing. The reason, as I told my group, is that I've seen people with previous knowledge of the application move from testing for new information to showing off what they already know, and I wanted to level that knowledge.

I introduced a planning-oriented charter of identifying what there was to test in a very specific part of the software, and I watched the group learn that by testing it. Sometimes they would see something but miss noting it, and I would step in to make a note of it in the Mindmup document with the driver. It was interesting to see how the task turned hard once the obvious things had been noted, and the mob still kept contributing more ideas, finding hidden features using common conventions of where functionality can be found in a user interface.

We also worked on a more detailed testing-oriented charter, only to run into a bug that I had not seen before. We changed our task to logging that bug properly, and it turned out to be the most difficult thing we had done all day. As a mob, we needed to agree on what we were reporting and to what audience, and the format brought out well the diverse opinions in the group for us to discuss.

Thoughts for future

I'm deciding between two options for a future setting of this course. Either I do it again like this, since people get to test so much more in pairs, or I spend the whole day mobbing, to take the whole group deeper. There is so much testing I could help everyone get better at, either by pairing with them or by facilitating a mob for them.

If you feel you could teach testing to others, try teaching in a mob format. It gives you a whole new power in helping your students out of their specific problems. And let's face it: every student deserves the chance of teaching something new to their teacher too. In testing, everyone has special insights. And sharing those is the most awesome thing I can think of, today.

Saturday, October 24, 2015

When do you take a joke too far?

I have a heuristic that I use nowadays: when I feel I should not write about something because it's sensitive and could be just my view, I go against my instinct. There's a corollary: some things like that are better dealt with privately, and sometimes there is a fine line between what to blog about and what to handle by email. Blogging is more self-reflection than action.

I just had a great time at TinyTestBash in Brighton. So many amazing people. So many great discussions. Inspirational new speaker talks. And an overall sense of belonging.

But there's one thing that left me thinking. There's a TestBash meme going on with one particular person and a tutu. Tutu, as in a ballerina skirt.

This meme was around at the main TestBash to the extent that the person at the heart of it included it in his talk, wearing a Desmond Tutu hoodie and making remarks about not wearing a tutu, the skirt. It was all in good fun, and everyone seemed to take it that way.

The tutu theme continued at TinyTestBash. A tutu was made available for the person at the heart of the meme, and he again refused to wear it. But this time it was different. I felt it was at the brink of too much. It might be just me who thinks this way, transferring my feelings onto someone who has none of them.

Here's my line of thought. If I were the constant center of a joke that I considered funny at first, I could feel very uncomfortable when that joke turns out to be the thing that defines me to new people. At that point, I'd have two options. I could get visibly upset and tell everyone to just stop it. Or I could laugh along, but find it ever less funny. Kind of like the laughter I produce when I hear very gendered jokes about my gender. Not funny, but not laughing is socially a worse option.

I think we might need to stop and think about when we take a joke too far. I borrowed the tutu from him and wore it for the day. Then again, a tutu on me is normal, not funny. Just for the fun of it, I could wear a tutu for my next talk at TestBash NY, just to show that the tutu has moved on.

I think we should stop to think whether we're about to take a joke too far when the joke becomes the thing to talk about with that particular person. And if we are, how do we change the joke so that it becomes positive in a different way? The TestBash spirit brings forth wonderful jokes and memes, like the TestBash briefs we saw handed out this year. There's a time for every meme. It might be time for the tutu meme centered on one person to go away or transform into something different.

Tuesday, October 20, 2015

From test of the need to building the program

Today, October 20th, is the last day we sell Super Early Bird tickets (350 euros) for the European Testing Conference. All of our tickets are cheap relative to the conference content we're setting up, but this is ridiculously cheap for a 2-day professional conference in a high-end location in Bucharest, with great international speakers.

Two weeks ago we set out to test the need for this conference and the support in the community, by setting a goal: people showing us they want us to do this by buying tickets. On the last day, we are at 85% of our goal, and we believe you will take us above our set limit. Two weeks ago, I was feeling moments of despair, fearing what the decision to test would reveal. But it is revealing that you're with us.

We've started our call for co-creation. That is to say, this event is from us, the organizers, as part of the community, to the overall community. We think we know some great speakers, as we follow our craft intensely and have been around. But we also know that our sample is the visible contributors from around the world. We need the community's help to find more, so that we can reach out and invite them to share with us, for you.

We do not limit ourselves to the call for co-creation. We seek the best speakers and content, both with you and without you, co-creating it with the speakers. And we believe we can do this, because we have uniquely set our conference up to pay the speakers for the work they do. With this, we're changing the world of testing conferences, which sometimes appears mostly to enable people whose companies have something to market.

A few people have asked why they should buy a ticket without knowing the content. My answer: look at the list of organizers and the sessions we run at conferences around the world. That sets a bar (which is not low) for the content we will be offering. Then decide whether you trust us with your money, to be invested in the best possible testing learning experience, combining testing as we know it as developers, testers and analysts.

We've intentionally revealed so far only that Linda Rising will be with us to encourage us into the days of practice. Linda is the author of the book Fearless Change, and she speaks from the vast experience of having been around a while, straight to people's hearts. When I first heard her speak at Turku Agile Days, I left the room crying, and I wasn't alone. Her talks move people and change the world. And now she joins us to change the world of testing.

I will make sure I can say equally positive things about every one of our speakers, not just the keynotes. That's why we co-create. We get to know you. We want to help you shine. And through this, we all win. We're creating a balanced conference of practical testing, with an agile slant, as we believe in fast feedback. You'll want to be there.

Oh, the normal ticket price is now available: it's 750 euros. The Super Early Bird price is available only today.