Tuesday, March 31, 2015

Learning by Doing: a conference volunteering experience

In addition to my ambition to become the best tester I can be, I’ve had a secondary goal that supports the first one. I’m getting very close to being able to call myself a professional event organizer. The number of events I’ve put together is large, ranging from small group events to professional conferences and parties for 2000 people. I’ve been the main organizer for Tampere Goes Agile (140–200 participants) and Helsinki Testing Days (500 participants), and I’ve had a great time on the EuroSTAR program committee as one of its members. I have never been paid a salary for organizing; it’s always been a hobby.

Organizing is my means of reaching all the wonderful stories that help me grow as a professional. It has taught me a mechanism for asking for the things I feel I need, as I’m not alone with my interests. There was, however, something I still really wanted to learn. I wanted to learn more about how to put together a huge conference and include hundreds of volunteers in the organizing. There’s no better place in Finland to learn that than Slush, the wonderful startup conference.

While some people might ask a Slush core organizer or the person responsible for volunteers out for lunch and a chat, I do things differently. I volunteered among the hundreds of people, everyone 20 years younger than me, and served as one of the infodesk people for the duration of the conference. And I got to see what it really is – and meet wonderful younger people who are active enough to change the world a little in their turn. I also got to experience how invisible that position made me in the eyes of my colleagues, whom I would see participating and not recognizing me at all in the unexpected position and attire. I never ever wear a beanie hat - except now I did.
 
Here are my observations from within, with lessons learned:

It’s about hierarchy and overstaffing
Slush volunteer recruiting is hierarchical. First they recruit the team leads, usually from personal connections. For example, there were three info teams, each with their own leader. There was a clear vision of the volunteer teams that they would staff. And having a team that was rather too big than too small was a way to manage the no-show risk that comes with volunteers.

When something needed to be escalated, there was a clear hierarchy of who helps with what and whom to escalate problems to.

Responsibility of an area creates self-organization
There were a lot of things the info team would not know beforehand. I found what my team lead did quite admirable. She communicated an overall vision of what we were responsible for, and introduced tools for making notes of what we had learned in first-hand contact with customers. We were all encouraged to use our best judgment, as the answers would not always be available. There was a customer problem to be solved, and many solutions would do.

Personalized communications
During the whole volunteering experience, I got only a few generic emails that were sent to all volunteers. Instead, the communications also followed the hierarchy. Messages were sent to team leaders, who in turn personalized them for their own teams rather than just forwarding them. This in particular is a huge difference from most of the conferences I get to participate in as a speaker.

Working in shifts
Over the timeframe of the conference, each volunteer was expected to work about half of the conference time, leaving the other half open for participation. Thus you got free conference entry for volunteering. All daytime conference volunteers were free by the evening party, and had access to it.

Training the volunteers with style
Slush had Bruce Oreck run workshops on how to handle the customer connection and create unique customer experiences. It was partly motivational, but it also focused on tools for being yourself and delivering things your way. The contents were one thing, but the fact that they had a “big name” training the volunteers surprised me. Someone had really put some thought into that.

Both the training sessions and the info sessions with informal parties afterwards seemed to build a spirit of exclusivity: those who were there were treated as if they were special. And they are.

The amount of volunteers
There was a crazy number of volunteers, several hundred. During the conference, I heard that the volunteers don’t usually just show up (even if this year was easier); recruiting is work that they do continuously. They visit different schools to talk about Slush. They work hard to make volunteering cool. The trainings, the parties, being part of a great event, and a work certificate are all things they’ve deliberately decided on.

Recruiting volunteers in order of unpleasant work
Among the different teams, the first team to be recruited was the one that I found had the least pleasant job. I’m not sure whether this was intentional, but it would make sense. I probably was not the only volunteer to say no to carrying stuff around; later on, when contacted about a customer-facing position in the frontline, I said yes.

Monday, March 30, 2015

Job titles: It matters what we’re called



At TestBash 2015, Martin Hynie delivered a talk that stuck with me. While he said he has not written it up as an article, as he wished to tell the story in person with all the possibilities of misconception, I will paraphrase the bit that I got from it.

Testers to Business Analysts to Testers

He ran an experiment to find out what would change if the name we’re called, “testers”, were changed. After investigating a few options that might describe testers’ work to non-testers, they ended up renaming testers to business analysts.

What happened with that has left me thinking for two days already. Their services were perceived as more valuable, and with a little time, managers started thinking that things had got better not because of the renaming but because of hiring new people with new skills.

When they renamed themselves back to testers, there was resistance to taking away the great, skilled contributors. They had been there before; they would be there after.

The Manager’s Dilemma

The bit that scares me most about Martin’s story is the managers who, with just a title change, forgot that these people had been employed in the company for years. Think of it this way: the amount of daily insight these managers have into how development (and testing) work must be ridiculously low. If the managers were engaged in the actual work, they would see what contributions happen. They would know their employees as people, not as labels. Daily insight is important. We need better managers, not just for testers, but for software professionals of all sorts.

So what?

For some people – like myself – the identity that comes with the title “tester” is very relevant. I’ve written many times before about how uncomfortable I am being called a developer, let alone a programmer. Because I’m a tester, I found other testers and a community of people I learn from. I learn from non-testers too, but the emotional bond with people who love the same things I love is incredible. Because I’m a tester, I’m allowed and encouraged to start my thinking from another angle in relation to my programmer colleagues. Because I’m a tester, I’ve become what I am professionally and I love my work.

I too could call myself a business analyst: I am a business-facing tester. But instead, I’ve chosen to go into the “I’m more valuable than you” discussions and argue for my value, to invite my teams to experiment with what I can deliver when invited to where they think I don’t belong, and I feel I’ve won many of my battles. As a tester, my salary is just as high as my developers’.

I’ve spent years learning with the community, sharing and listening, working things out together to be where I am today. Being the type of individual who derives her energy from the people around her, I could not be happier with the route I’ve ended up on. And I still see a path ahead that allows me to grow and learn more without becoming anything other than a tester.

I talked to a few people at TestBash who felt they were not as lucky as I have been. I heard stories of testers not being allowed away from the previous project’s end game even when a new one was starting without testers. I heard stories about continuous belittling, having to hear you are not to be invited because everyone else already knows enough without you. Stories of being “just a tester” and people projecting a lot of negative images onto that, regardless of how wonderfully skilled you are. Martin’s story was one of those, in the form of an experiment to find out whether just a name would make things different.


Friday, March 27, 2015

Being played with temporary trust

Listening to Karen Johnson talk on asking questions and making the other person feel comfortable, I had a sudden flashback. A few months ago, I was learning unit testing by pairing with a developer friend. It was one of those high-anxiety moments: I felt powerless and out of control. I did not know what I was doing, and I did not know what we were about to do. Very much out of my comfort zone.

After the experience, I felt good, relaxed and like I had learned and achieved something. Perhaps that is why it took me months to realise, in the middle of a conference talk, how smartly I was played in that situation.

Dwelling in my emotions, I was being very impatient. I had a feeling this wasn't going anywhere and that I should just stop. Only now do I realise that what he did must have been intentional: he asked me to trust him for 15 minutes, and we would stop if I still felt like it.

It's actually a really smart move for relieving anxiety: time-box the experience, invite temporary trust to allow time to build long-term trust.
The reason I started to think about this is that I'm very uncomfortable learning intentional methods to play people. Collecting ideas is great, but I don't seem to work well with trying to learn a method; it just freezes me. The thing I could take from the presentation I was listening to is that it's OK if there's a piece that sticks with me. Even if I need to learn to ask really good questions or be very precise in how I use my words, being methodical about it makes me uncomfortable.

I really hate it when I realise, in the moment, that someone is using a method on me, and I think back to managers who apply things they've been trained to do. I wonder if I'm just being difficult.

Why should a tester care for unit testing?

I ran a workshop session together with Ru Cindrea today at #TestBash on TDD with Lego Robots. It's a session that, from a facilitator's point of view, turns out very differently at an agile (developer-majority) conference than at a test conference. Right after the session, one of the participants came to chat about things and asked a question:

Why would a tester care for unit testing? Isn't unit testing really the developers' business?

There are a few reasons why I care, and I thought sharing them might be useful for some other testers too. Yes, test-driven development or unit testing is typically something developers do - an integral part of development. Yes, test-driven development is more about design than about testing. But there is still a great number of things that make it worthwhile for testers to gain at least a basic understanding of test-driven development.

Reason 1: In software development, it's all the pieces together that build the full view

If, as a tester, I'm totally unaware of what my team's developers can do and have done with unit tests, I will not be able to adjust my focus based on that information. I will miss the information I can get now, knowing at least a little of what their unit tests include through reading them. And without that information, I will make different decisions on where to allocate my limited time.

While unit tests and system tests serve very different purposes, they co-exist (with a lot of other mechanisms) in developing software. We tend to see different aspects at a small scale than when looking at user flows.

Reason 2: Without unit tests, everyone suffers, so perhaps I could help

As a tester, I've noticed that in projects where developers don't do unit tests, two things happen: 1) features flow through development very slowly, as developers analyse and test their changes, and 2) testing on the system level slows down as simple problems are found. The problems are most often features that vanish unintentionally, as the code did not hint at the feature. It's hard, if not impossible, to remember later on all the intentions we have built into our code.
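To make the vanishing-feature point concrete, here is a minimal, hypothetical sketch (the pricing rule, names and numbers are all invented for illustration): a unit test records an intention that the code alone might not make obvious, so a later rewrite cannot silently drop it without a test failing.

```python
# Hypothetical example: the discount rule and the names are invented.
def final_price(price, customer_years):
    """Long-time customers get a 10% discount."""
    if customer_years >= 5:
        return round(price * 0.9, 2)
    return price


def test_long_time_customers_keep_their_discount():
    # If a later rewrite drops this rule, the test fails and the
    # intention does not silently vanish with the old code.
    assert final_price(100.0, customer_years=7) == 90.0


def test_new_customers_pay_full_price():
    assert final_price(100.0, customer_years=1) == 100.0
```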

To help with unit-testlessness, knowing what unit tests are about helps me help the team get them done. Sometimes I help by talking to managers about how important it is to organise time for developers to do unit testing. Sometimes I help by suggesting what the unit tests could include that they are currently missing. Knowing about unit tests helps me talk to the developers about things I'd like to see tested with code, and together we can figure out whether there is a way to monitor my concerns at the level of unit tests.

Some developers find unit testing really hard. Without a gentle nudge, they won't spend the time to learn to overcome the difficulties.

If I declare unit tests a developer concern only, I miss a lot of collaboration opportunities.

Reason 3: Change and readability are things that I find relevant

The software I work on has the habit of changing a lot. There's continuity in the work we're doing, and a never-ending queue of more work when the previous is done. Change is something to embrace. But when changes create a tone in the team that suggests tests are in the way of change, it's good to know how to respond. I've been through this type of discussion numerous times: when every change invalidates all the tests, it seems like a waste of time to create new tests only to be thrown away soon.

The tests are not supposed to all break on a focused change. If that is the case, there's something wrong with the tests. And a simple observation that the team seems to be throwing away tests can be a helpful discussion starter.

I can also help with the readability of the tests. I don't need to understand every detail to see whether the names of the unit tests make sense and allow me, especially with support from a developer on the team, to understand what the tests watch out for. I can suggest better names for things, or go even further: work out how to write good unit tests and lead by example, changing what was there into something more readable for when we get back to changing this code a year later.
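As a sketch of what helping with readability might look like (the Order class and the rule are invented for illustration), the two tests below check exactly the same thing; only the name changes, and that alone tells a reader skimming test names what the suite watches out for.

```python
from dataclasses import dataclass, field


@dataclass
class Order:
    # Hypothetical domain object, invented for this illustration.
    items: list = field(default_factory=list)

    def total(self):
        return sum(self.items)


# Before: the name tells a reader nothing about the rule being protected.
def test_order_1():
    assert Order(items=[]).total() == 0


# After: the name states the behaviour the test protects.
def test_empty_order_totals_to_zero():
    assert Order(items=[]).total() == 0
```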

Reason 4: Test-driven development is my best hope for relevant unit-level test automation

The more I look into unit testing and its dynamics, the more it appears that teams that learn to do test-driven development, rather than after-development unit testing, get their tests to stick better. While it appears painful to learn at first (so many developers tell me they hate TDD as it just won't fit their ways of developing), once learned, it produces unit tests with a regular pattern.
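A rough sketch of that regular pattern, with names invented for illustration: the test is written first and states the next small intention, and then just enough code is written to make it pass before refactoring.

```python
# Step 1 (red): the test exists before the code and states the intention.
def test_slug_replaces_spaces_with_dashes():
    assert slugify("Hello World") == "hello-world"


# Step 2 (green): just enough code to make the test pass, then refactor.
def slugify(text):
    return text.lower().replace(" ", "-")
```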

There are so many obstacles to adding the tests later on, when the code was not designed with testing in mind. When you've just got the thing to work, changing (and potentially breaking) it to add tests creates an incentive not to add tests that would be too hard. And the level of what there is stays low.

Reason 5: Collaboration (pairing, mobbing) changes responsibilities

When everyone in the team pairs with other team members, or the whole team does mob programming together, the idea of who does what blends into a new mix. Testers are expected to participate in these activities. And all of a sudden, the discussion, through defining the driving unit test, of what a clear API call would look like is not as far-fetched as it was before.




Wednesday, March 18, 2015

Love of test automation

I've always been big on sapient, thinking testing. Thinking testing includes the smart use of tools to extend your reach to whatever you cannot do without them. But I've also been very sceptical about the value of automation in relation to the overall value you get from testing in general. And I've played well with the developers in my team, having ideas they will help me implement, allowing me to focus less on the programming stuff myself.

Some organisations I've worked with have put significant effort into regression test automation, with poor results. If there is regression, the automation does not catch it. It is always in the wrong place. It eats up effort to maintain in order to remain useful. And it might not have been worth its cost in the first place. One of my "most successful test automation" experiences, in an organisation I worked with for years, was automation that did not test anything but allowed the organisation to have the courage to dump the manual tests they had listed - while being very proud of the lack of regression in production. That's a success story of exploratory testing and a great development team. Automation played only the part of changing cultural elements that prevented good work.

But recently, as much as I may dislike it, I've started seeing little pieces of evidence that conflict with my core beliefs about test automation. And being a good researcher (empirical evidence FTW), I cannot just close my eyes to them.

My work and how I see it

I love testing. But over the years, I've started to realise that I love software more than I love testing. The things we do with software are amazing. The businesses we build on software are enchanting. When I choose places to work, I ask a lot about the value of the software I would be contributing to. Testing does not exist for testing's sake. I would not choose to work on a product I did not believe had the potential to be financially feasible, valuable, meaningful.

To increase the value, more and more places are seeking ways to achieve shorter delivery times and incremental value. And this is great: from the point of view of loving testing, it means someone actually cares for the information testing provides, when there are points where the feedback can steer without being disruptive to other goals.

With incremental deliveries comes the question of repetition in testing. I've been fortunate to work with teams where "everything that used to work breaks" is not the norm. Developers have been pretty good at dealing with that. They have often dealt with it by thinking hard and long, as they have not had (unit) test automation to support them. And with hard thinking combined with understanding of the domain and reading of the code, the side effects are not the main effects. The large numbers of issues to deal with result more from developing new features that are not quite there yet.

I've now spent almost three years with the same product and team. I blogged earlier about the idea that I don't run the same tests but always vary things to find new things while having a good chance of noticing the old things too. Believing I don't run the same tests, and actively working so that I don't, is core to staying awake and alert. This reminds me of an article by Bret Pettichord on testers and developers thinking differently, and one of the tester strengths: tedium tolerance. Thinking the way I do is my way of dealing with repetition and the risk of being bored - I reframe. But three years is slowly starting to get under my skin. I've found great things by wanting to use the product in versatile ways, things that automation could not find. I would look at most of the same things even if automation existed. I could only hope to stop for different problems, or take more risks in just not using some things at all for now. I could experiment more with not testing instead of testing the same areas differently.

From an organisational point of view, I must seriously think about whether my organisation would be better off in the long run if they had more automation (and a much, much more broken product in production). Especially once I'm gone, they may not have the skill to find another great tester to join them, as there are forces driving towards "documented test cases" unless you really have a vision of what better testing looks like. With the same effort, could I or should I have chosen a different path if my own interests didn't drive me to exploratory testing?

Recently, we've updated pretty much the whole technology stack - every feature relies on new dependencies. We've rewritten things for better maintainability (still without proper unit tests, which is why I call it a rewrite instead of refactoring) so that features no longer have the same implementation they had before. We make these changes one thing at a time, continuously, requiring continuous testing. And with these kinds of changes, the developers have started introducing regression by losing features and scenarios we are supposedly supporting.

Right now I'm sure that, since a lot of the features we are losing make sense to me only through product experience, the company would not be in a good state when I manage to leave - something both I and they know will happen soon enough. So putting the list of features into a living specification that resides in test automation is an effort we're getting deeper into.

You can't have it all, but the choices in the order of things are beliefs. And seeing where the beliefs take us is just interesting.

A friend with great results

A friend I look up to has recently been working on automating tests around a specific idea. The idea as such is old: the vast number of environments that mobile applications bring in is a special challenge. The product under test is a financially successful, established business. So it is clear they too could not only survive but thrive to this point without automation that covers such a critical aspect of testing.

What seems to be different, though, is that unlike before, there is now a chain of services that makes ideas like this more practical. Imagine supporting all the different smartphones out there - with more coming in as I write this. Instead of having a farm of your own that you carefully select, what if someone's business is actually running the virtual farm for you? The same idea has been wonderful with browsers, and to me it makes even more sense in the handhelds world. And yes, I love Finnish startups.

Using a virtual farm to run the automation on, and seeing actual valuable results from even the very simple "let's see the app launches" types of tests cross-platform, left me in awe. We can really do things like this. Things that are useful. I just wish she could show the world the great things they've accomplished. But things like this - from other sources too - make me question my stance on things.

The test automation we did 10 years ago isn't the test automation we do today. Old experiences may be invalidated by what has changed around us. For some organisations at least.

Reminders of the world changing

A week ago, I listened to a discussion about teaching kids programming. There was a point in the discussion that left me thinking: developers all around the world are taking software into areas it has never been before, doing unseen things with it. Some of the data that is currently being collected will be used in ways we can't yet foresee. The world has many smart people putting serious effort into (partially) transforming human-intensive processes with automation. Programming is just automating things.

With automation of any kind, the need for smart, thinking individuals does not completely vanish. But it transforms. I've enjoyed reading about the Google car, which actually cannot cope with changes in its driving setting, as the calculations, to be fast enough, rely heavily on built-in maps of the area the car operates in. But those things are still driving around, and the problems related to them are being actively thought about.

Perhaps, just perhaps, I should start thinking more actively about how I, as an exploratory tester, can put more effort into helping my team turn more and more aspects of testing into programmable decision rules. Full automation or partial automation, but forward a step at a time. I find great value in taking steps away from programmatic thinking and seeing things differently.

With these thoughts, it's time to go back to reading Cem Kaner's partial oracles materials. Great testers could help make sense of better partial oracles for automation, so that we get closer to good results, from a testing point of view, with the automation efforts. Regression is such a small concern in all of this.
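As a rough sketch of the partial-oracle idea (the sort function here is only a stand-in for whatever is actually being tested): even when we don't know the one correct expected output, automation can still check properties that must hold.

```python
import random
from collections import Counter


def sort_under_test(values):
    # Stand-in for the real implementation being tested.
    return sorted(values)


def test_ordering_and_permutation_properties_hold():
    values = [random.randint(-1000, 1000) for _ in range(200)]
    result = sort_under_test(values)
    # Partial oracle 1: every element is no larger than the next one.
    assert all(a <= b for a, b in zip(result, result[1:]))
    # Partial oracle 2: nothing appeared or vanished along the way.
    assert Counter(result) == Counter(values)
```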



Tuesday, March 17, 2015

When there's testers and testers

Last week, I delivered a talk at Scan Agile Conference (you can see it here, though the timing of how the slides are edited into the online video isn't very good). From the discussions that followed that talk, I learned something.

There's testers and testers

At an agile conference, I talk mostly to an audience who are not testers. Most of them, though, have experiences of testers. Experiences of testers who are as far from what I talk about as testing as they can imagine.

The majority of the experiences of "testers" people seem to describe are very script- and plan-oriented. They may create the scripts themselves or expect the answers to be handed to them as ready-made scripts. They have a list of "test cases" they run and an idea of expected results to monitor. There's a small degree of exploration in the sense of pointing out problems other than what the script mentions, should they run into them. And some even take steps away from the script, finding a little more. But essentially, scripts still drive them.

When these types of "testers" enter agile and the expectation of fast feedback, the experience is that they struggle. They always run out of time, starting their pile of scripts from the start, the middle or the end. Their ways of choosing what to run first may seem disconnected from what is actually happening, mostly because those who look at their work, "the developers", feel these people never come and ask what is going on or what was changed. They may or may not have other means of knowing how to adjust, but the feeling is that they don't. They just keep repeating the same manual checks. And they deliver feedback too slowly to be useful at the agile pace.

And then there's the other kind of testers. Testers I like to call exploratory testers. Testers who put together testing and checking in a smart package at the level of the team, and who are able to deliver useful feedback and compensate for the lack of technical practices a team may have, enabling delivering quickly, even continuously, without automation.

What should we call ourselves then?

Of the testers I personally know, very few are truly in the first category. I think a lot of the negative feelings towards these "testers" come from the fact that organizations still don't understand how to enable good testing, and culturally drive great people into stupid scripting and avoiding communication. When you've absorbed a culture like that for decades, changing may require effort some people can no longer cope with. But a lot of the "testers" I know of just wait for permission to do great work.

There are forces driving "testers" into dumbing down their work: the ISTQB certification scheme in particular. Old-fashioned ways of managing. Ideas of test cases, of testing as the creation of artifacts instead of seeing it as the performance it is.

To distinguish how testers like me (liberated from old-style management, as I can explain why and how things should be done) are different from "testers", I've chosen to call myself a skilled exploratory tester.

Renaming the others to checkers?

I talked about the idea that my community (context-driven) separates checking from testing, where checking is fact-checking with programmable decision rules. That leads to the idea that a lot of the "testers" people seem to have met are manual checkers.

However, when you have an idea in your mind of what testing is, my changing your word to checking just won't work. Instead, offering a new concept for what I'm adding to the common world view seems, in my experience, to help more with communication.

Deprecation of the term Exploratory testing

I love the article James Bach and Michael Bolton just posted on Exploratory Testing 3.0, where, in the context of Rapid Software Testing terminology, exploratory testing is just testing. But instead of trying to transform the words people around me use into this coherent set from a testing point of view, I'm still inclined to go with Exploratory Testing 2.0. There are more non-testers around software, and I find fighting for the testing craft's chosen vocabulary a battle I might not want to spend time on.

I would use this:
"We now recognize that by “exploratory testing”, we had been trying to refer to rich, competent testing that is self-directed." - James Bach and Michael Bolton
I can't (perhaps yet, perhaps ever) define all testing as exploratory. I still see the pace of adapting as a major difference. The more we have scripted, the more likely we seem to rely on those scripts when working under the schedule pressure of releasing.

I see too much ISTQB-ish non-exploratory testing all around me: the creation and maintenance of artifacts that don't drive focus in a positive manner (opportunity cost...). And I see a lot of smart automated checks around me. The latter makes the former obsolete, I hope. But there's still the exploratory approach of learning while working on the software, and that needs to stay. Mixing that up with what people think they know of testing done by "testers" does not seem to help me get across what I'm trying to say.

So what?

I started this post by saying I learned something. I learned two things:

  • There's more to what I do as a tester than "finding unknown unknowns". I model the system and communicate with the developers (even spying through version control) to adapt and redesign all my tests every time I test, to provide useful results quickly. With this, we are able to have a team without great agile technical practices and still safely deliver continuously (daily) to production.
  • While what I do is really just skilled testing, the majority of people have a different idea. Giving what I do a different name helps them see the difference from what they've come to know as the work of "testers".