Saturday, December 30, 2017

Finding a Bug I wasn't looking for

Some years ago, I was working as a test manager on a project where I was considered "too valuable to test". So I tested in secret, to be able to guide the people who were not considered as "valuable" as I was, and to make them truly valuable.

We were an acceptance testing group on the customer side, and the thing we were building would take years and years before all the end-to-end layers existed to use it like an end user would. A lot of the testing got postponed as there was no GUI, leaving us thin on resourcing early on. There were multiple contractors building the different layers, and a lot of self-inflicted complexity.

The core of the system was a calculation engine, a set of web services sitting somewhere. Despite the little effort available and the weird constraints on my time, I still managed to set things up to test the system while it had no UI.

We used SoapUI to send and receive messages. The freebie version back then did not have a nice "fill in the blanks" approach like the pro one, and it scared the hell out of some of my then colleagues. So we practiced in layers, putting values in Excel sheets and then filling the values back into the messages. As my group learned to recognize that amongst all the extra technical cruft were values they cared deeply about and concepts they could understand better than any of the developers, we moved to working with the raw messages.

In particular, I remember one of my early days of trying to figure out a system specified in thousands of pages by using it. I could figure out that there were three compulsory fields that needed filling; the rest was all sorts of overrides. So I took a listing of some thousands of combinations of those three things, parametrized the request to send thousands of messages, and saved the responses on my hard drive.

I did not really know what I would be looking for, but I was curious about the output. I opened one response in Notepad++ and skimmed through thousands of lines. There was no way I would know if this was right or wrong. I got caught up seeing error codes, and made little post-it notes categorizing what I was seeing. I repeated this with another message, and felt desperate. So on a whim, I opened all the messages I had and started searching for the codes on my notes across all the messages.

The first code I searched for was something I conceptually understood shouldn't be that common. Yet 90% of the messages I had included that code. I checked with a business expert, and indeed my layman understanding was correct: the system was broken in a significant way if this code was this common. It meant lots of manual work for a system that was intended to automate decisions in the millions.
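I did that search by hand in Notepad++, but mechanically it amounts to counting how many saved responses contain a given code. A minimal sketch in Python (the directory, file pattern and error code are all hypothetical, not from the actual project):

```python
from pathlib import Path

def code_frequency(response_dir: str, code: str) -> float:
    """Return the share of saved response files that contain an error code."""
    files = list(Path(response_dir).glob("*.xml"))
    if not files:
        return 0.0
    # Count files whose raw text contains the code anywhere in the message.
    hits = sum(
        1 for f in files
        if code in f.read_text(encoding="utf-8", errors="ignore")
    )
    return hits / len(files)

# A code that should be rare showing up in ~90% of responses is exactly
# the kind of signal worth taking to a business expert.
```

The point is not the tooling: a grep-style count across saved responses is enough to turn "I keep seeing this code" into a number you can sanity-check with a domain expert.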

By playing around to understand, when told not to, I found a bug I wasn't looking for. But it was one that was a no-go in a system like that.

My greatest regret is that I spent time in the management layers, fighting on their terms. With the skills I have as a tester, I would have won the fight for my organization if I had just tested. Even when told not to. I was too valuable not to test.

This experience made sure I would from then on find places to work that did not consider the most expensive tester someone who wasn't allowed to test. And I've been in the right kind of organizations, making a difference, ever since.

Sunday, December 17, 2017

Kaizen on Test Strategies

I just saw a colleague changing jobs and starting to talk about test strategies. As I followed their writings, my own experience came into focus. I realized I am no longer working on visible test strategies; the last one I created was at the start of my second-to-last job, and it did not prove that valuable.

When I say test strategy, I mean the ideas that guide our testing. Making those visible. Assessing risk and choosing our test approaches appropriately.

In the past, making a strategy was a distinguishable effort. It usually resulted in either a document or a set of slides. It guided not only my work, but supposedly the whole project. It was the guideline that helped everyone make choices towards the same goals.

Thinking through the strategy and specifics of a particular project was a distinguishable effort while I was still doing projects. With agile and continuous delivery, there is no project, just a flow of value in a frame of improving excellence. When I joined new organizations that had no projects, my introduction of coming to "improve / lead the testing efforts" triggered the strategy considerations. So what is different with my most recent effort, other than the lazy explanation of me not being diligent enough?

I approach my current efforts with the idea that they have been successful before me, and they will remain successful with me. I no longer need to start with the assumption that everything is wrong and needs to be set right. Even if it was wrong, I assume people can't change fast without pain, so I approach it with a Kaizen attitude - small continuous improvement over time, nudging a little here and there and looking at where we are and where I would like us to find our way.

Nowadays, a selection of visions of what good testing looks like resides in my head. I talk about that, with titles like "modern testing", "modern agile" and "examples of what awesome looks like". I don't talk about it to align others to it; I talk to give people visibility into my inner world, and to learn what they are ready to accept and what not.

All the work with test strategy looks very tactical. Asking people to focus here or there. Having a mob testing session to reveal the types of information we miss now in general. Showing skills and tools. Driving forward respect for the exploratory tester, but also the patient building of a test automation system that keeps improving as my understanding of better improves.

Looking back, I remember (and can show you) many of the test strategy documents I've created. None of them has been as effective as the way I lead testing, with Kaizen in mind, for the last five years. 

Saturday, December 16, 2017

But women did not submit

Sometimes some women have the energy to go and mention on Twitter when they see conferences with all-male keynotes or an all-male lineup. Most of the time, we notice but choose not to invite the attacks that result from pointing it out.

I'm feeling selectively energetic, and thus I'm not addressing directly the particular conference that triggered me into writing, but the underlying issue of how conferences tend to respond.

The most common response is: they tried, but women did not submit.

I don't think they tried enough.
I believe we should, in conferences, model the world as we want it: half women, half men. With lineups usually going up to 100 people, it is not hard to find 50 awesome women a year to speak on relevant topics. The audience would get an amazing experience and learning. The ones threatened by this proposal are the 40 men who in the current setup get to speak, and who with my proposal would have to queue for another event.

Instead of calling for proposals and choosing with equality in mind, perhaps we should be choosing based on equity. And if we did, perhaps we would not have to "fake it" long before we "make it".


The pool of awesome women speakers, in my experience, grows when potential women participating in conferences see people they can relate to and they feel they can do it too. We've done a lot in this respect, with SpeakEasy adding support after initial spark, in the last few years and it shows on some conferences. 

Here's a thought experiment I played through for the time when women are still not equally available. Let's assume I want to speak and I can invest 100 work units into speaking. I can invest this time in different ways:

Respond to CFPs, trying to vary contents.
Each CFP process takes 10 units of work, and each talk takes 20 units of work. Varying the contents so that my talk would fit an invisible hole in the program is a lot of work: if my talk is on How Use of Amazon Lambdas Changes Testing, there could be 10 others with the same fashionable topic. If my talk is on Security Testing, maybe it is too much of a niche to be given space at this conference. If my talk is on hands-on experiences in testing machine learning systems, maybe the keynote speaker already fills the slot for discussing machine learning.

So let's assume I approach this as an equal player in the field, and I want to get to the conferences. I want my voice out there. It's embarrassing to have to say no when you get accepted, so I might play my chances by budgeting for two acceptances (reserving 2 × 20 units of work for the talks), which leaves me enough to submit to 6 conferences.

Wait to be invited
If someone else carries the load of 10 work units, finds you, invites you, and negotiates exactly the topic that would fit the program, you save a lot of work. The 100 units then allow for 5 talks instead of 2, making this person more available at conferences. This is the way to create equity while we need it. 
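As a sketch, the arithmetic of the two routes, using the illustrative unit costs from the thought experiment (not measured data):

```python
BUDGET = 100     # total work units available for speaking per year
CFP_COST = 10    # preparing and submitting one conference proposal
TALK_COST = 20   # preparing and delivering one accepted talk

# Route 1: respond to CFPs, budgeting for two acceptances.
talks_via_cfp = 2
submissions = (BUDGET - talks_via_cfp * TALK_COST) // CFP_COST  # 6 conferences

# Route 2: wait to be invited; the organizer carries the CFP cost,
# so the whole budget goes into the talks themselves.
talks_via_invite = BUDGET // TALK_COST  # 5 talks instead of 2
```

Same budget, more than twice the talks: that gap is the equity argument for inviting speakers rather than only waiting for them to submit.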

So, when conferences say that women did not submit, they're actually saying:
  • the women who submitted did not choose to bet their time on us but went elsewhere
  • we did not do enough to get women to be considered for the program
  • we believe in treating everyone the same (equality), even though that approach enforces the status quo
Having a good proportion of women is good business too. The content is more representative and speaks to a wider audience. 

And, onwards to intersectionality. It's easy to count this by binary gender, but that is not all the diversity we look for. We want to see diversity of ethnicity, the whole spectrum of gender, and whatever minorities we are not getting the chances to learn from. 

Long Delay in Feedback

A year ago, we created a new feature that I tested diligently, loving every moment of it. Yesterday, almost a year later, we received feedback that "works as designed" isn't quite good enough for the purposes of one type of customer. I looked at the email, frustrated, as it outlined concerns I had raised a year ago without reaction. When the response email from a developer mentioned "we had not thought of this scenario", I bit my lip not to correct. Correcting isn't helpful.

I would love to speak in specifics, but I can't. No one is telling me what to write and what not, but I sense these things myself. When creating features around security, the less people know of the inherent designs, the safer our clients are. But I can speak in concepts. And the concept of what I tested, and of information that was dismissed when delivered by me but accepted when it came from the customers, is relevant. 

We work in an agile/DevOps fashion. The feature got built increment by increment, and went through numerous releases before it was officially out there. Each increment, we would talk in the team about the thing we would be adding. It was always natural and fluent to test everything that got mentioned as functionality to add. It was also evident to test error cases with the functionalities we added. Equally, it was evident to test those functionalities for security, performance, reliability and ease of use. The feature was built on a Windows service, and testing the integrated system was evident. 

What was not evident was the testing of other similar features integrating with the same Windows service. Well, it was something I did. It was something I reported on. It was something where we agreed that we'd change things if customers felt the way I did. 

For well over six months in heavy use, we did not hear back. Until now. 

I could take this as "it's just time for adding that functionality", in incremental fashion. It's not like anyone was relaxing and slacking off in the meanwhile. Or, I could take it as yet another consideration of what goes wrong with providing and accepting feedback. 

When we build incrementally, I find we are often not concerned with things beyond the immediate agreed scope. It takes quite a skill in product ownership to see the connections and the whole, when development teams tend to focus on the slice they're pointed at. 

The long delay in feedback brings things back as surprises to current plans. There was feedback with a short delay that got dismissed. But all in all, reacting to this feedback right now, with the short cycle to delivery we've built, makes the customer happy regardless. In a world where they are still used to waiting six months or more for a resolution, delivering fast makes them feel happier than having it right the first time. 

Thursday, December 7, 2017

Becoming a Feedback Fairy

Late in the evening at a speakers' dinner at CraftConf 2017, I met a new person. He was a speaker, just like me, except that when he asked what I would speak on, he used the words: "Explain it to me like I am not in this field, and I don't understand all the lingo".

I remember not having the words. But this little encounter with a man I can't even name made it into my talk the next day, when I introduced myself for the first time in a new way:

"I'm Maaret and I'm a feedback fairy. It means that I use my magical powers (testing skills) to bring around the gift of deep and thoughtful feedback on the applications we build and the ways we build them. I do this on time for us to react, and with a smile on my face."

That little encounter coined something I had already been coming to from other ends. Two other, prior events also had their impact.

At the Devoxx conference some time ago, I did a talk about Learning Programming. Someone in the audience gave me feedback, explaining that they liked my talk. The positive feedback, as it was phrased, made an impact: they said they'd ask me to be their godmother, unless that place was already up for grabs for J.K. Rowling. As a dedicated Harry Potter fan, being next in line after anything from J.K. Rowling is probably the nicest thing anyone can say.

As I received this feedback, I shared it with the women in testing group, and a new friend in the group picked it up. As I was doing my first ever international keynote at an open conference, she brought me a gift you can nowadays see in my Twitter background: a fairy godmother doll, to remind me of my true powers.

For the Ministry of Testing Masterclass this week, I again introduced myself as a feedback fairy.
You can be a feedback fairy too, or whatever helps you communicate what you do. There's an upside to being a magical creature: I don't have to live by the rules set by the mortals. 

Friday, December 1, 2017

Sustainability and Causes of Conferences

Tonight is one of those nights where I think I've created a monster. I created the #PayToSpeak discussion; or, better framed, I brought a discussion that already existed outside our testing bubble inside it, and gave it a hashtag.

The reason I think it is a monster is that most people pitching in to the conversation have very limited experience with the problem it is a part of.

My bias prior to experience

Before I started organizing European Testing Conference, I was a conference speaker and a local (free) conference organizer. I believed strongly that the speakers make the conference.

I discounted two things I had no experience of back then:

  1. Value of organizer work (in particular marketing) in bringing in the people
  2. Conference longevity / sustainability
Both of these things mean that the conference organizers need to make revenue to pay expenses while the conference itself is not ongoing. 

Choices in different conferences

My favorite pick on #PayToSpeak Conferences is EuroSTAR, so let's take a more detailed look at them.
  • A big commercial organization, paying salary of a full team throughout the year
  • Building a community for marketing purposes (and to benefit us all while at it) is core activity invested in
  • Pays honorarium + travel to keynote speakers
  • Pays nothing for a common speaker, but gives an entry ticket to the conference
  • Is able to accept speakers without considering where they are from as all common speakers cost the same
  • Significant money for a participant to get into the conference, lots of sponsors seeking contacts with the participants
I suspect, but don't really know, that they might still have revenue from the conference after using some of the income on running the organization for a full year. I do know their choice is not to invest in the common speaker, and I believe it lowers the quality of talks they are able to provide. 

Another example to pick on would be Tampere Goes Agile - an Agile conference in Finland I used to organize. 
  • A virtual project organization within a non-profit, set up for running each year
  • No activity outside the conference except planning & preparation of the conference
  • Pays travel to all speakers, can't pay special honorarium to keynote speakers
  • Runs on sponsors money and stops when no one sponsors
  • Is not able to get big established speaker names, as they don't pay the speakers
  • Requires almost zero marketing effort, straightforward to organize
  • Free to attend for all participants
Bottom line


#PayToSpeak is not about conferences trying to rip us speakers off when they ask us to cover our expenses. Conferences make different choices on ticket price (availability to participants versus the amount of sponsor activity) and on investment / risk allocations.

Deciding to pay the speakers is a huge financial risk if paying participants don't show up.
Paying speakers' travel conditionally (only if enough people show up) does not work out.
Big-name keynote speakers typically expect 5-15k of guaranteed compensation in addition to their travel expenses being covered.

Conferences decide where they put their money: participants (low ticket prices), speakers (higher ticket prices with arguably better quality content), keynote speakers (who wouldn't show up without the money) or organizers (real work that deserves to be paid or will not continue long).

#PayToSpeak speaks from a speaker's perspective. We can make choices about which conferences we can afford based on the speaker-friendly choices they make.

Options

If we understand that there are two problems #PayToSpeak mixes up, we may find ideas for how to improve the current state:

  1. Commonly appearing (but not famous) speakers should not have to pay to speak to afford speaking.
  2. New voices with limited financial means should not have to pay to speak to afford speaking. 
If some conference does relevant work for point 2, then as a representative of point 1 I would consider paying to speak. But I would have to choose maybe one per year, because that money comes not out of my company's pocket, but my own. 

If some conference collects money for a cause in a transparent way, I again would consider paying to speak, capping the number I can do in a year. 

There are options for removing Pay to Speak:
  • Seek local speakers (build a local community that grows awesome speakers); paying the expenses is not a blocker as the costs are small
  • Commit to paying speaker expenses, but actively invite the companies they work for to pay, if possible, to support your cause. See what that does. 
  • Set one track to experiment with paying expenses, and compare submissions to that track against the others, with e.g. attendee numbers and scores. 
  • Say you pay travel costs on request, and collect the info on who requests it with the call for proposals
  • Team up with some non-profit on this cause and give them money for scholarships for some speakers. 
You can probably think of some more. 

None of these conferences is inherently evil. Some of them are out of my reach because they are #PayToSpeak. I'm not a consultant, nor do I work for a company that treats its testers as a marketing group. If I have to #PayToSpeak, I can't. I will remain local, and online. 

There are people like me, and better than me, who have not started off by paying their dues to get a little bit of a name at some #PayToSpeak conference. I want to show them the options of not having to #PayToSpeak. 

Why defining a conference talk level means nothing

Some weeks back, unlike my usual commitment to follow my immediate energy, I made a blogging commitment. The commitment almost slipped, only to be rescued today by Fiona Charles saying the exact same thing. So now I just get to follow my energy in saying what I feel needs to be said.

Background
 
As you submit a talk proposal, there are all these fields to fill in. One of them asks the level of the talk. The level then appears as a color coding on the program, suggesting it is among the three most important pieces of information people use to select sessions. The other important bits are the speaker name (which only matters if the speaker is famous) and the talk title. On how to deal with talk titles, you might want to check out the advice on the European Testing Conference blog.

The beginner/intermediate/advanced talk split comes in many forms. Nordic Testing Days in particular caught my eye with its "like fish in the sea" and "dipping your toes" metaphoric approach, but it is still the same concept.

The problem

To believe concepts like beginner/intermediate/advanced talk levels are useful, you need to believe that people can be compared linearly on a topic like this.
This same belief system is what we often need to address when we talk about imposter syndrome: we think knowledge and skill are linearly comparable, when they actually aren't.

The solution

We need to think of knowledge and skills we teach differently - as a multi-dimensional field.

Every expert has more to learn, and every novice has things to teach. When we understand that knowing things and applying things isn't linear, we get to appreciate that every single interaction can teach us something. It could encourage the "juniors" to be less apologetic. It could encourage the "intermediates" to feel they are already sufficient at something, even if not at everything. And it could fix the "experts'" attitude towards juniors, where interaction is approached with preaching and debate, over a dialog in which the expert learns from the junior just as much as the other way around.

So, the conference sessions....

I believe the best conference sessions, even on advanced topics, are intended for basic audiences. This is because expertise isn't shared. We don't have a shared foundation. No two experts are the same.

It's not about targeting to beginner / advanced, it's about building a talk on a relevant topic so that it speaks to a multi-dimensional audience.

As someone with 23 years of industry experience, even my basic talks have some depth others don't. And my advanced talks are very basic, as I need to drag entire audiences to complex ideas like never writing a bug report again in their tester career.

We need more good talks that are digestible by varied audiences, and less random labeling of the level of the talk. In other words, all great talks are for beginners. We're all beginners from another's perspective.