Tuesday, December 31, 2019

2019 and Me

Every year I look back and see how things turned out. While I'm one of those people who collects numbers, the relevant insights are hardly ever quantitative in nature.

At work, I looked at how our team had evolved based on the tracks we left in the world and learned that we've grown more used to uncertainty, incremental plans and delivering great results with continuous flow. I learned about the ways people had grown and exceeded their past selves.

My past self sets me a standard that I work against. Not the other people. Past me. And how the current me is learning to be more of me, more intentional and accidental, and free to make my choices. I don't want to live my life on other people's defaults, I want to tweak my own settings and explore where they take me.

I tried many different settings in 2019:
  • I allowed myself to be *less responsible* and let things fall forward when I was low on energy. I learned I have people close to me who catch things, carry the ball forward and don't blame me for being human.
  • I tried not blogging for six months. It was hard to stop a routine, but it also felt liberating to confirm why I blog when I do. In general, I have not written for an audience, but just allowed people to see what I write for myself.
  • I tried blogging for an audience, behind a paywall. I did not enjoy it, and came to the idea of blogging and making videos for an audience in 2020. Can't wait to try that one.
  • I said yes to all speaking that pays at least the travel, but applied for none. That turned into 20 talks.
  • I auto-blocked 700 people on Twitter to learn about enforcing boundaries and doing what I needed over other people's comfort.
Some things I will remember from this year:
  1. FlowCon talk in France and my talk #400 - with a standing ovation.
  2. DDD EU keynote in the Netherlands and finding my crowd. 
  3. Making the list of the 100 most influential people in ICT in Finland and having a 4-page article about me in ITViikko magazine
  4. Talking to 150 people in Calls for Collaboration to choose speakers for my own conference (European Testing Conference) but also, through TechVoices, for others (Agile Testing Days USA, Selenium Conference London keynote)
I have some numbers too:
  • 45 blog posts with a half-a-year break from blogging (2018: 110), now split across 3 platforms
  • 20 talks, out of which 6 keynotes
  • +2 countries I have spoken at, totaling 26 now
  • 8 graduated TechVoices mentors (I helped them become speakers!)
  • 2 conferences organized
  • 2 Exploratory Testing Peer Conferences organized for #35YearsOfExploratoryTesting
  • 50 flights, sitting in planes for 165 hrs to fly a total of 107 878 km (2018: 120 hrs)
  • 5662 Twitter followers - after blocking 700
  • +58 556 page views in the year totaling to 631 004 page views all time to my blog (2018: +81820, 582 448)
While all of the above are my "on the side of work" achievements, work is where I go to learn.

I learned about business value, and how to discuss it a little better at the office, creating a business value learning game.

I learned about making space for people to discover what they are capable of doing, and not pointing out when they contradict their past selves before they are ready to see it. 

I learned that the manager role is exactly like my tester role except for three things: 1) having to click "approve" comes with the manager role; 2) I now feel equal with the most intimidating, wonderful and special developers - I had not realized I wasn't feeling equal with them even though I already was; 3) *lack of performance* management is the hardest job I have ever done.

I learned I am a master of procrastination, as I can turn ideas into code without writing code myself, and I want to overcome my internal excuses for not just doing it.

I was there to witness us moving to great and still-improving results, and might have had something to do with some of it.

Turning the "impossible" into possible should happen any moment now, when my consistent push of three years turns into continuous deployment for our type of product.

2019 was great. 2020 just needs to be different. 

Happy new year y'all. 




Thursday, December 5, 2019

A New Style for Conference Speaker Intake: Call for Collaboration

Drawing from personal experience as a conference speaker and a conference organizer wanting to see change in how conference speakers are selected, I have been experimenting with something completely different.

The usual way for conferences to find their speakers is to cast two nets:
  • Invite people you know
  • Invite everyone to submit to call for proposals/papers (CfP) and select based on the written submission
Inviting works for people with name and fame. If you want to find new voices with brilliant stories from the trenches, the likelihood of you not knowing all those people (yet) is quite high. Asking them to announce themselves makes sense.

This step, where a speaker announces their existence to a conference, is where I have discovered that a completely new way of dealing with submissions creates a difference.

What is a Call for Proposals/Papers

In the usual world of announcing you might be interested in speaking at a conference, you respond to a Call for Proposals/Papers. The Papers version is what you would expect in more academically oriented conferences, and the paper they mean is usually an 8-page document explaining the result of years of research. The Proposals version is what you would expect in a more industry-oriented conference, and the proposal is a title, a 200-word abstract, a 200-word bio of yourself, and whatever other information a particular conference feels they want to see you write that would help them make selections.

While speaking in public is about getting in front of a crowd to share, conference CfPs are about writing. The way I think of it is that writing is a gate-keeping mechanism to speaking in conferences.

As a new speaker, learning to write in this particular style to be accepted may be harder than getting on that stage and delivering your lessons by speaking about them. At the very least, it is a different set of skills.

In my experiences in working to increase new voices and diversity at conferences, there are two things that most get in the way:
  1. Finances - underrepresented groups find it harder to finance their travel if the conference does not address that
  2. Writing to the audience - unrehearsed people don't write great texts about their great talk ideas. Many find the writing so overwhelming a task that they don't submit.
Conferences try to help people in multiple ways, usually through writing-based means. It is fairly common to expect a conference to provide some feedback on your written text, especially when using supportive submission systems where you then improve your text based on feedback. But the edits are usually minor, even when you could frame your talk differently to present it better. Many ask for speaking samples (videos), adding to the work expected in the competition for a conference speaking slot. Some conferences shortlist proposals and then call people, to ensure the speaking matches the writing. Some conferences realize after selection that you could use help and call in mentors like myself to help bring out a better delivery of an already great idea.

What is a Call for Collaboration

Call for Collaboration is a submission process I have been discovering for the last five years, coming to terms with my discomfort with choosing a speaker based on writing instead of speaking. I have felt I don't appreciate the purely competitive approach of writing for a CfP to win a speaking slot, and wanted to find something different.

Call for Collaboration is about aspiring speakers announcing their existence and collaborating on creating that proposal. It's a process where the conference representatives invest online face-to-face time in getting to know great people they could invite. And it's a process where the investment from the speaker side is smaller, creating less waste in case of not fitting the scarce conference slots. It's a human-to-human process where people speak, and instead of assessing we build the best possible proposal from whatever the aspiring speaker comes in with.

In Call for Collaboration (CfC), we appreciate that every voice and story belongs on a stage, and that making the story the best form of itself increases its chances at this conference, but also has a ripple effect of improving it for other conferences.

This submission process was first created for European Testing Conference, and later used for the TechVoices track at Agile Testing Days USA 2018 and 2019, and the TechVoices keynote at Selenium Conference London. So far I have done about 500 15-minute calls over the years of discovering this.

How Does This Work?

It all starts with an aspiring speaker deciding they want to make their existence and idea known for a particular Call for Collaboration a conference kicks off, and being willing to invest 15 minutes of their life in a discussion about a talk idea they have.

[Image: TechVoices version of CfC + Activity Mentoring]

Schedule a Call

To announce their existence, they get a link created with Calendly that shows 15-minute timeslots available to schedule.

Behind the scenes, a conference representative has connected Calendly to their calendar so it knows when they are not available, and has defined the time frames when they accept calls and limits on the number of calls per day. They can define questions they want answered, and I usually go for the minimum:
  • Your talk's working title
  • Optional abstract if you want to pass us one already
  • Your pronouns
From a conference representative's perspective, each call is 15 minutes, like a coffee break. It means taking an online call with someone anywhere in the world and meeting someone awesome.

If something comes up for the aspiring speaker, they can reschedule with Calendly. Calendly also handles time zones so that both parties end up expecting the same time - at least if you have the tool create a calendar appointment for you.

Show Up for the Call

The 15-minute call is for collaboration. It starts with establishing that we don't need to discuss credentials, just the talk idea, and that everyone is awesome. We are not here to drop people from the conference, but to understand the world of options and make this particular option shine, together.

It continues with the aspiring speaker telling how they see their talk: what it is about, what they teach and what the audience would get out of it.

The usual questions to ask are along the lines of "Would you have an example of this?", "What is your current idea of how you would illustrate this?", "Have you considered who your audience is?", "Why should people care about this?" or "We know you should be talking about this, but how would you tell that to people who don't know it yet and have many options among similar topics?".

I have had people come to the call with the whole story of their life in agile, and leave with one concrete idea of what they are uniquely able to teach. There are talks in this world that exist because they were discovered through these 15-minute discussions.

For some of the calls, we've had a whole group of people from the conference - this serves as a great way of teaching the mechanism onward, mentoring the mentors into the right mindset. We are there to build up the speaker and the idea, not to test it for possible problems.

Share to the World

At the end of the call, the conference representative asks for permission to summarize what they learned about the talk in a tweet that references the aspiring speaker, and with that permission shares it. Sharing serves three purposes: it helps remember what the talk was about (to prioritize for an invitation to work on the talk further); it allows the aspiring speaker to confirm whether their core message was heard and to correct it; and it creates a connection for the aspiring speaker to other people in the community interested in the theme.

Prioritize to Invite

Now we are at a point where there isn't really an abstract, but there is a tweet, and the conference representative has heard from the aspiring speaker what the abstract could be about. We can make a selection based on how people speak in that call, and particularly, what unique content they would bring to that conference.

If someone isn't quite there yet on how they deliver their message, we can invite them and ask them to pair up with an activity mentor for rehearsing the talk. This is the only way to get some of the unique new experiences from people who are not accustomed to speaking in public. With rehearsing, people can do it. The only concern I sometimes have around rehearsing is English skills - I have mentored people who would either need a translator (we used a translator in our call) or a few more years of spoken English.

This is a point where you have usually spent about the same effort on the person as you would have carefully reading their written abstract - but you might now have a different talk to consider as a result of the collaboration.

If you invite, the next step is that you need the abstract for the conference program. Or it could be that this abstract is used for yet another round of selections, if you want to pin this process onto a more traditional CfP.

Activity Mentoring for Conference Proposal - What Is This?

The activity of creating that title, abstract and bio to show the best side of the talk is the next part. The newer the speaker, the harder this is to get right without help. A natural continuation of CfC is activity mentoring, ensuring the written text, as a deliverable of the process, reflects the greatness of the talk.
  • 1st draft is what comes out of the aspiring speaker without particularly trying to optimize for correctness. It is good to set expectations, but also to encourage: something is better than nothing. This is just a start.
  • 2nd draft is what comes out when the conference representative from the call puts together the 1st draft, their notes on what the talk is about, the tweet they summarized things into, and their expertise on abstracts. It's usually an exercise of copy-pasting together my notes of their spoken words and their own written words in an enhanced form.
  • Submission is what the conference system sees, and it is an improved version of the 2nd draft.
Example

The latest effort from this process will soon be published as the TechVoices track of Agile Testing Days USA 2020: 9 new speakers with stories that are not available without taking the effort to get to know the people. Many of these voices would not be available if the choice had been made on what they wrote alone. Every single one of these voices is something I look forward to, and they teach us unique perspectives.

I still have another activity mentoring period ahead of me, with a chance of hearing the premieres of these talks before the conference and helping them shine with yet another round of feedback.

We chose 9 talks out of 45 proposals invited from the USA, South America and Canada. I also had a few calls with ideas from people outside the invited geographies and helped them figure out their talks in the same 15-minute slots.

As a mentor, I had time to talk to all of these people and feel privileged to have had the chance to hear their stories and tell about their existence through tweets. I would not have had time to help all of them get their proposals into a shape where they could get accepted to the conference. Activity mentoring is a focus-draining activity, whereas having a call is more easily time-boxed and happens without special effort.

I spent 11 hours over the timeframe of a month talking to people and getting to know them. I spent another 10 hours on the 9 people that were selected.

The hours those 15-minute slots add up to have been the best possible testing awareness training I could personally have received. They have given me a lot of perspective, and made me someone who can drop names and topics for conferences that have a more traditional Call for Proposals.


Tuesday, December 3, 2019

The First Test On a New Feature

Testing various features, I have just one rule:
Never be bored.
Sometimes I try to figure out a template for what I do when I test and how I end up doing what I do, and it always boils down to this. Do something that keeps you engaged.

Today I was testing a feature about scheduling tasks of all sorts, and as I was thinking about getting started, I told myself I should test the basic simple positive scenario first. Not that I thought that wouldn't work, but to show myself how the feature could work. I've used that rule a lot, but today that rule did not make me feel excited.

Instead, I started off with a list of examples that was provided. I quickly glanced through the list and selected one that looked complicated, telling myself that if something wouldn't work, this would probably be it.

It turns out I was right. Instead of seeing the feature work, I got to see a fairly non-explanatory error message in the log I was monitoring. So I ran a second test, the simplest sample from the list, to see how the feature could work. From starting testing to discussing a bug with the developer took just minutes.

Meanwhile, I was having a discussion with another developer to refresh my ideas on test strategy: would we test these types of things reliably in the long term with unit tests? Yes - and I got confirmation that while fixing what I complained about, the first developer had already improved the unit tests around it.

Similarly, the fix for the bug was also just minutes away, and I needed to chip in some more ideas.

I ended up asking a lot of questions. Questions about how things would work on leap years and leap days, and about impossible combinations of months and days - and instead of me ending up testing these, the developer volunteered. Questions about how a time could be too short or too long for our taste. Questions about what the log would show. The discussions changed my perspectives, and clarified our shared intent around what we were building.
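To make those calendar questions concrete, something like this minimal sketch captures the month-and-day combinations worth probing. The feature itself and its interface are not part of this post, so the sketch uses only Python's standard library, and the helper name is_real_date is a made-up stand-in:

```python
# A minimal sketch of calendar edge cases, not the feature under test;
# is_real_date is a hypothetical helper built on the standard library.
from datetime import datetime

def is_real_date(year: int, month: int, day: int) -> bool:
    """Return True if the calendar accepts this year/month/day combination."""
    try:
        datetime(year, month, day)
        return True
    except ValueError:
        return False

edge_cases = [
    (2020, 2, 29),   # leap day in a leap year -> valid
    (2019, 2, 29),   # leap day in a non-leap year -> invalid
    (2100, 2, 29),   # century year not divisible by 400 -> invalid
    (2000, 2, 29),   # century year divisible by 400 -> valid
    (2020, 4, 31),   # impossible month/day combination -> invalid
    (2020, 12, 31),  # end-of-year boundary -> valid
]

for year, month, day in edge_cases:
    verdict = "valid" if is_real_date(year, month, day) else "invalid"
    print(f"{year:4}-{month:02}-{day:02} -> {verdict}")
```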

I ended my day of testing by googling evil cron for testing, and had lots of fun with online tools generating test data I could try. Things that were not supposed to work did, and things that really stretched what was supposed to be valid were confirmed to work. I explored time calculation, passing valid and invalid values. And as always with testing, I had a great time.
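For a flavor of what that probing can look like, here is a rough sketch in the evil-cron spirit. The post does not name the product's scheduler or the online tools, so this assumes a cron-style syntax and borrows the croniter library purely as a stand-in parser:

```python
# A sketch of probing cron-style schedule expressions; croniter is an
# assumption here, standing in for whatever parser the product uses.
from datetime import datetime
from croniter import croniter

expressions = [
    "0 0 29 2 *",   # leap day only: next run should land in a leap year
    "0 0 31 * *",   # the 31st: should quietly skip 30-day months
    "*/1 * * * *",  # every minute: valid, but maybe too short for our taste
    "60 0 * * *",   # minute 60: should be rejected outright
    "0 0 * * 7",    # day-of-week 7: some cron dialects allow it, some don't
]

base = datetime(2019, 12, 3)
for expr in expressions:
    if not croniter.is_valid(expr):
        print(f"{expr!r} rejected by the parser")
        continue
    next_run = croniter(expr, base).get_next(datetime)
    print(f"{expr!r} accepted, next run {next_run}")
```

The verdict of a stand-in parser is not the interesting part; the interesting part is where the product under test agrees or disagrees with it.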

These are first tests, and I'm not done. Instead of testing this all by myself, I invited a group of people to do mob testing with me on this, to test my own thinking and improve our test strategy around what makes sense to automate even further.

There is no absolute first test. It is never too late to do the thing you thought you should have done first. With testing, very few options expire. You just need to keep track of your options.


Sunday, December 1, 2019

Sequences of time and cognition

Back in the days when we were starting to figure out what the essential difference between exploratory testing and the other testing was, we figured out that we would call the other one scripted testing.

We recognized separations of sequence in:

  • time - activities separated by time
  • cognition - activities separated by skill focus
For exploratory testing to take place, the design and execution activities needed to be intertwined so that neither time nor cognition allowed for separation. The reasons were clear:
  • designing and planning what to do early, when you know the least, makes little sense and has a high risk of wasteful activity
  • designing and planning by a "more skilled" person to be executed by a "less skilled" person assumes knowledge can be transferred with a document instead of seeing it as something acquired through effort.

With DevOps and Continuous Delivery, the time separation has transformed. We still see some of it in the sprint-based ways of working where we start with BDD-style scenarios very much separated in time even if only by days or hours. That part is scripted testing, not exploratory testing.

The cognitive separation has also transformed. For exploratory testing to be great, the cognitive sequence of today includes using code as a way of executing things and documenting things, and this can only be included if the exploratory tester knows programming at least to a level of effectively pairing with others in turning ideas into code. The scripted variants separate the cognitive sequence into the person who designs the tests, the person who automates the tests and monitors them, and the person who tests the stuff automation may not cover, perhaps in what we used to know as the exploratory testing way.

Thinking through this, I have come to the idea of cognitive bridges. In close collaboration with teams, the cognition around the testing activity even when it is split for multiple people can either be isolated (not exploratory) or bridged (exploratory). 

We used to have cognitive bridges we organized for larger exploratory testing efforts through time-boxing and debriefing with other testers. Now we have cognitive bridges in the whole team, and with activities beyond just plain testing. The one that fascinates me the most right now is how the tone of discussing developer intent builds a cognitive bridge.