Thursday, December 5, 2019

A New Style for Conference Speaker Intake: Call for Collaboration

Drawing from personal experience as a conference speaker and a conference organizer who wants to see change in how conference speakers are selected, I have been experimenting with something completely different.

The usual way for conferences to find their speakers is to cast two nets:
  • Invite people you know
  • Invite everyone to submit to call for proposals/papers (CfP) and select based on the written submission
Inviting works with people with name and fame. If you want to find new voices with brilliant stories from the trenches, the likelihood that you don't know all those people (yet) is quite high. Asking them to announce themselves makes sense.

This step - how a speaker announces their existence to a conference - is where I have discovered that a completely new way of dealing with submissions makes a difference.

What is a Call for Proposals/Papers

In the usual world of announcing that you might be interested in speaking at a conference, you respond to a Call for Proposals/Papers. The Papers version is what you would expect at more academically oriented conferences, and the paper they mean is usually an 8-page document explaining the results of years of research. The Proposals version is what you would expect at a more industry-oriented conference, and the proposal is a title, a 200-word abstract, a 200-word bio of yourself, and whatever other information a particular conference feels they want to see you write to help them make selections.

While speaking in public is about getting in front of a crowd to share, conference CfPs are about writing. The way I think of it is that writing is a gate-keeping mechanism for speaking at conferences.

As a new speaker, learning to write in this particular style in order to be accepted may be harder than getting on that stage and delivering your lessons by speaking about them. At the very least, it is a different set of skills.

In my experience working to increase new voices and diversity at conferences, there are two things that get in the way the most:
  1. Finances - underrepresented groups find it harder to finance their travel if the conference does not address that
  2. Writing to the audience - unpracticed people don't write great texts about their great talk ideas. Many feel the writing is such an overwhelming task that they don't submit.
Conferences try to help people in multiple ways, usually through writing-based means. It is fairly common to expect a conference to provide some feedback on your written text, especially when using supportive submission systems where you then improve your text based on that feedback. But the edits are usually minor, even when you could frame your talk differently to present it better. Many conferences ask for speaking samples (videos), adding to the work expected in the competition for a conference speaking slot. Some conferences shortlist proposals and then call people, to ensure the speaking matches the writing. Some conferences realize after selection that you could use help and call in mentors like myself to help bring out a better delivery of an already great idea.

What is a Call for Collaboration

Call for Collaboration is a submission process I have been discovering for the last five years, coming to terms with my discomfort with choosing a speaker based on writing instead of speaking. I have felt that I don't appreciate the purely competitive approach of writing a CfP submission to win a speaking slot, and I wanted to find something different.

Call for Collaboration is about aspiring speakers announcing their existence and collaborating on creating that proposal. It's a process where the conference representatives invest online face-to-face time in getting to know great people they could invite. And it's a process where the investment from the speaker's side is smaller, creating less waste in case they don't fit the scarce conference slots. It's a human-to-human process where people speak, and instead of assessing, we build the best possible proposal from whatever the aspiring speaker comes in with.
In Call for Collaboration (CfC), we appreciate that every voice and story belongs on a stage, and making the story the best form of itself increases its chances at this conference, but also has a ripple effect of improving it for other conferences too.

This submission process was first created for European Testing Conference, and later used for TechVoices track for Agile Testing Days USA 2018 and 2019, and TechVoices keynote for Selenium Conference London. So far I have done about 500 15-minute calls over the years of discovering this.

How Does This Work?

It all starts with an aspiring speaker who wants to make their existence and idea known for a particular Call for Collaboration a conference kicks off, and who is willing to invest 15 minutes of their life in a discussion about a talk idea they have.

 Image. TechVoices version of CfC + Activity Mentoring

Schedule a Call

To announce their existence, they get a link created with Calendly that shows 15-minute timeslots available to schedule.

Behind the scenes, a conference representative has connected Calendly to their calendar so it knows when they are not available, and has defined the time frames when they accept calls and limits on the number of calls per day. They can define questions they want answered, and I usually go for the minimum:
  • Your talk's working title
  • Optional abstract if you want to pass us one already
  • Your pronouns
From the conference representative's perspective, each call is 15 minutes, like a coffee break. It means taking an online call with someone anywhere in the world and meeting someone awesome.

If the aspiring speaker has something come up, they can reschedule with Calendly. Calendly also handles timezones so that both parties end up expecting the same time - at least if you have the tool create a calendar appointment for you.

Show Up for the Call

The 15-minute call is for collaboration. It starts with establishing that we don't need to discuss credentials, just the talk idea, and that everyone is awesome. We are not here to drop people from the conference, but to understand the world of options and make this particular option shine, together.

It continues with the aspiring speaker telling how they see their talk: what it is about, what they teach, and what the audience would get from it.

The usual questions to ask are "Would you have an example of this?", "What is your current idea of how you would illustrate this?", "Have you considered who your audience is?", "Why should people care about this?", and "We know you should be talking about this, but how would you tell it to people who don't know it yet and have many options on similar topics?".

I have had people come to the call with the whole story of their life in agile, and leave with one concrete idea of what they are uniquely able to teach. There are talks in this world that exist because they were discovered through these 15-minute discussions.

For some of the calls, we've had a whole group of people from the conference - this serves as a great way of teaching the mechanism further - mentoring the mentors to take the right mindset. We are there to build up the speaker and idea, not to test it for possible problems.

Share to the World

At the end of the call, the conference representative asks for permission to summarize what they learned about the talk in a tweet that references the aspiring speaker, and with that permission, shares it. Sharing serves three purposes: it helps remember what the talk was about (to prioritize it for an invitation to work on the talk further); it allows the aspiring speaker to confirm whether their core message was heard, and to correct it; and it creates a connection between the aspiring speaker and other people in the community interested in this theme.

Prioritize to Invite

Now we are at a point where there isn't really an abstract, but there is a tweet, and the conference representative has learned from the aspiring speaker what the abstract could be about. We can make a selection based on how people speak in that call, and particularly on what unique content they would bring to that conference.

If someone isn't quite there yet on how they deliver their message, we can invite them and ask them to pair up with an activity mentor for rehearsing the talk. This is the only way to get some of the unique new experiences from people who are not accustomed to speaking in public. With rehearsing, people can do it. The only concern I sometimes have around rehearsing is English skills - I have mentored people who would either need a translator (we used one in our call) or a few more years of spoken English.

This is a point where you have usually spent about the same effort on the person as you would have spent carefully reading their written abstract - but you might now have a different talk to consider as a result of the collaboration.

If you invite them, the next step is creating the abstract for the conference program. Or this could be the abstract used for yet another round of selections, if you want to pin this process onto a more traditional CfP.

Activity Mentoring for Conference Proposal - What Is This?

The activity of creating that title, abstract and bio to show the best side of the talk is the next part. The newer the speaker, the harder this is to get right without help. A natural continuation of CfC is activity mentoring, ensuring the written text delivered by the process reflects the greatness of the talk.
  • 1st draft is what comes out of the aspiring speaker without particularly trying to optimize for correctness. It is good to set expectations, but also to encourage: something is better than nothing. This is just a start.
  • 2nd draft is what comes out when the conference representative from the call puts together the 1st draft, their notes on what the talk is about, the tweet they summarized things into, and their expertise on abstracts. It's usually an exercise in copy-pasting together my notes of their spoken words and their written words into an enhanced form.
  • Submission is what the conference system sees, and it is an improved version of the 2nd draft.
Example

The latest effort from this process will soon be published as the TechVoices track of Agile Testing Days USA 2020: 9 speakers with stories from new speakers, stories that are not available without taking the effort to get to know the people. Many of these voices would not be available if the choice was made on what they wrote alone. Every single one of these voices is something I look forward to, and they teach us unique perspectives.

I still have another activity mentoring period ahead of me, with a chance of hearing the premieres of these talks before the conference and helping them shine with yet another round of feedback.

We chose 9 talks out of 45 proposals invited from the USA, South America and Canada. I also had a few calls with ideas from people outside the invited geographies and helped them figure out their talks in the same 15-minute slots.

As a mentor, I had time to talk to all of these people and feel privileged to have had the chance to hear their stories and tell of their existence through tweets. I would not have had time to help all of them get their proposals into a shape where they could get accepted to the conference. Activity mentoring is a focus-draining activity, whereas having a call is more easily time-boxed and happens without special effort.

I spent 11 hours over the timeframe of a month talking to people and getting to know them. I spent another 10 hours on the 9 people who were selected.

The hours that these 15-minute slots add up to have been the best possible testing awareness training I could personally have received. They have given me a lot of perspective, and made me someone who can drop names and topics for conferences that have a more traditional Call for Proposals.


Tuesday, December 3, 2019

The First Test On a New Feature

Testing various features, I have just one rule:
Never be bored.
Sometimes I try to figure out a template for what I do when I test and how I end up doing what I do, and it always boils down to this. Do something that keeps you engaged.

Today I was testing a feature about scheduling tasks of all sorts, and as I was thinking about getting started, I told myself I should test the basic simple positive scenario first. Not that I thought that wouldn't work, but to show myself how the feature could work. I've used that rule a lot, but today that rule did not make me feel excited.

Instead, I started off with a list of examples that was provided. I quickly glanced through the list and selected one that looked complicated, telling myself that if something wasn't going to work, that would probably be it.

It turns out I was right. Instead of seeing the feature work, I got to see a fairly non-explanatory error message in the log I was monitoring. So I ran a second test, the simplest sample from the list, to see how the feature could work. From starting to test to discussing a bug with the developer was just minutes.

Meanwhile, I was having a discussion with another developer to refresh my ideas of test strategy: would we, in the long term, test these types of things reliably with unit tests? Yes - and I got confirmation that while fixing what I complained about, the first developer had already improved the unit tests around it.

Similarly, the fix for the bug was also just minutes away, and I needed to chip in some more ideas.

I ended up asking a lot of questions. Questions about how things would work with leap years and leap days, and impossible combinations of months and days - and instead of me ending up testing these, the developer volunteered. Questions about how a time could be too short or too long for our taste. Questions about what the log would show. The discussions changed my perspectives and clarified our shared intent around what we were building.

I ended my day of testing by googling evil cron expressions for testing, and had lots of fun with online tools generating test data I could try. Things that were not supposed to work did, and things that really stretched what was supposed to be valid were confirmed. I explored time calculations, passing valid and invalid values. And as always with testing, I had a great time.
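To give a flavor of the date edge cases in play, here is a minimal sketch in Python - my own illustration, not the feature, the test data, or the tooling from that day. It lists candidate dates around leap years and impossible month/day combinations and checks which ones the standard library accepts:

# Candidate date inputs around leap years and impossible month/day combinations.
# This is an illustrative sketch only; the scheduling feature under test is not shown here.
from datetime import datetime

candidates = [
    (2019, 2, 28),   # ordinary end of February
    (2019, 2, 29),   # Feb 29 on a non-leap year - should be rejected
    (2020, 2, 29),   # Feb 29 on a leap year - should be accepted
    (2100, 2, 29),   # a century year that is NOT a leap year
    (2000, 2, 29),   # a century year that IS a leap year
    (2019, 4, 31),   # impossible month/day combination
    (2019, 13, 1),   # impossible month
    (2019, 12, 32),  # impossible day
]

for year, month, day in candidates:
    try:
        datetime(year, month, day)
        print(f"{year}-{month:02d}-{day:02d}: valid")
    except ValueError as error:
        print(f"{year}-{month:02d}-{day:02d}: rejected ({error})")

The same list-and-probe idea extends to cron-style schedules: the interesting cases sit right at, or just past, what the format is supposed to allow.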

These are first tests, and I'm not done. Instead of testing this all by myself, I invited a group of people to do mob testing with me on this, to test my own thinking and improve our test strategy around what makes sense to automate even further.

There is no first absolute test. It is never too late to do the thing you thought you should have done first. With testing, very few options expire. You just need to keep track of your options.


Sunday, December 1, 2019

Sequences of time and cognition

Back in the days when we were starting to figure out the essential difference between exploratory testing and the other kind of testing, we figured out that we would call the other kind scripted testing.

We recognized separations of sequence in:

  • time - activities separated by time
  • cognition - activities separated by skill focus
For exploratory testing to take place, the design and execution activities needed to be intertwined so that neither time nor cognition allowed for separation. The reasons were clear:
  • designing and planning what to do early, when you know the least, makes little sense and has a high risk of wasteful activity
  • designing and planning by a "more skilled" person to be executed by a "less skilled" person assumes knowledge can be transferred with a document, instead of seeing it as something acquired through effort.

With DevOps and Continuous Delivery, the time separation has transformed. We still see some of it in the sprint-based ways of working where we start with BDD-style scenarios very much separated in time even if only by days or hours. That part is scripted testing, not exploratory testing.

The cognitive separation has also transformed. For exploratory testing to be great, the cognitive sequence of today includes using code as a way of executing things and documenting things, and this can only be included if the exploratory tester knows programming at least to the level of effectively pairing with others in turning ideas into code. The scripted variants separate the cognitive sequence into the person who designs the tests, the person who automates the tests and monitors them, and the person who tests the stuff automation may not cover, perhaps in what we used to know as the exploratory testing way.

Thinking through this, I have come to the idea of cognitive bridges. In close collaboration within teams, the cognition around the testing activity, even when it is split across multiple people, can either be isolated (not exploratory) or bridged (exploratory).

We used to build cognitive bridges for larger exploratory testing efforts through time-boxing and debriefing with other testers. Now we have cognitive bridges across the whole team, and across activities beyond just plain testing. The one that fascinates me the most right now is how the tone of discussing developer intent builds a cognitive bridge.

Friday, November 22, 2019

35 Years of Exploratory Testing

Exploratory Testing turns 35 years old this year. I've celebrated this by organizing two peer conferences to discuss what it is today, and my summary is that it is:

  • more confused than ever
  • different to different people
  • still relevant and important
Cem Kaner described it 12 years ago this way: 


The core of what I pick up from this is the emphasis on the individual tester without separation of the cognitive sequence, optimizing value through opportunity-cost awareness, and learning supporting test design and execution throughout the work being done.

I dropped words like project (because in my world of agile continuous delivery, they don't exist), and added words like opportunity cost to emphasize that continuously choosing to spend time on something is time away from something else. I also brought in the concept of cognitive load separation, as the idea is not to separate work into roles but to build skills and knowledge in people through doing the work.

Exploratory Testing after 23 years

Cem's notes are available in the presentation, and I wanted to summarize them here. He identified four areas to describe from his circles:

Areas of agreement
  • Definitions
  • Everyone does it to a degree
  • Approach, not a technique
  • Antithesis to scripting
Areas of controversy
  • Not quicktesting - packaged recipes around particular theory of error; requires domain and application knowledge to do well
  • Not only functional testing - quality beyond functional is of concern to exploring
  • Uses tools - test automation is a tool, but there could also be tools specifically in support of exploratory testing
  • Not only test execution - not a technique but an evolutionary approach to software testing. You can do all things testing in an exploratory and not-exploratory way. 
  • Complex tests requiring prep included - cycle of learning is not in the moment but varies
  • Certifications - don't understand this style of testing and can be worthless or anti-productive for the industry trying to do exploratory testing
Areas of progress
  • Understanding of quicktests - from works of Whittaker, Hendrickson and Bach
  • Oracle problem - thinking around oracles has evolved
  • Learning and cognition in the focus of ET - individual and paired work
  • Multiple guiding models - everyone with their own
Areas of ongoing concern
  • Modeling an area of early understanding - talk of it goes on
  • Myths - making way to understand it is cognitively challenging, skilled and multidisciplinary
  • Tracking and reporting status - dashboards and time-boxed approaches were the whim of the day
  • Individual tester performance - we don't know how to assess that
  • No standard test tool suite - tools guiding thinking in the smart way


Exploratory Testing after 35 years

Previous areas of agreement are not that anymore. 
Some folks have decided that Exploratory Testing is deprecated. 
Other folks are rediscovering Exploratory Testing as the smart way of testing that was relevant 35 years ago that is different today when automation is part of how we explore rather than a separate technique. 

We still agree that it is an approach, not a technique - we just don't agree on whether that separation is necessary. Wasteful practices of testing, both with and without automation, are still popular, and exploratory testing is a cost-aware approach to casting nets to identify quality-related information.

Previous areas of controversy are old lessons and not particularly controversial. Some of them (like certifications) describe a divided industry that no longer gets the same attention. Most of what was controversial 12 years ago is now part of defining the approach.

Areas of progress seem far from done from today's perspective. Quicktests give less value with the emergence of automation. Oracles were a focus of Cem Kaner's teaching for a decade after the previous summary, and we are still working to understand them with automation closely intertwined in our testing. Paired work in testing was a whim of the moment as a focus for really understanding the cognitive side of it, but in addition to paired work, we now have mob testing - working in a group.

Concerns are still concerns, except for tracking and reporting status. With automation integrated into exploratory testing and continuous delivery, status is not a problem, but the industry is further divided into those with fast-paced deliveries and those without.

What do we agree on, what are the controversies, what progress can we hope we are still making, and what concerns do we have at 35 years of exploratory testing?

Areas of agreement
  • All testing is exploratory to a degree
  • Exploratory testing is skilled, multidisciplinary and cognitively challenging and finds unknown unknowns
Areas of controversy
  • Is exploratory testing even a necessary concept in the world of continuous development, when the forms it takes can be labeled as "automation", "pairing", "production monitoring", "learning", and "test automation maintenance"?
  • The early experts cling to materials and lessons from the early 2000s and fail to connect well with a wider community, creating a tester community isolated from the overall software communities
Areas of progress
  • New voices thinking and sharing from practice first, in agile development delivering continuously: Anne-Marie Charrett, Alex Schladebeck, Maaret Pyhäjärvi and many others are working actively to move the area further
  • Developers are doing exploratory testing, the split to this being tester specialty is giving way to working together and learning together
Areas of concern
  • Lack of shared learning and seeking common understanding. The field feels more like a competition of attribution than learning to do testing well in modern circumstances.
  • Trainings on exploratory testing are hard for organizations to select, as they include very different things in a similar-looking box. A field as big as exploratory testing should have a whole series of trainings, not one that introduces the concept again and again.
  • Testers, in particular those without coding skills, are dropping out of the industry. We are losing people who believe they don't code, without helping them see their contributions through collaboration skills. The people dropping out are disproportionately women. The old adage that coding is not trainable, repeated to many testers, is harmful.



Testing Computer Software, 2nd ed

I'm collecting some of the history of Exploratory Testing and for that, going into selected references that brought the idea to me. The first book I read on testing (I'm a newbie, only 22 years in the industry :D ) was
Cem Kaner et al. Testing Computer Software, 2nd ed. published: 1999
For research purposes, I have had my hands on the 1st edition too, which is from 1988 but it's been a while.

For the second edition, I go to the index and look for exploratory testing.

exploratory testing
   - generally 6, 7-11, 215
   - boundary conditions 7-11
   - discover the program's limits 215, 241

Page 6 is in the chapter "The first cycle of testing", under Step 4: Do some testing "on the fly". Here the book speaks about running out of formally planned test cases. It emphasizes that regardless, you can keep testing. It encourages thinking of new things without spending much time preparing or explaining the tests. It encourages trusting your instincts and trying anything that feels promising.

It introduces the example in question as something where you switched from formal to informal very early on as the program crashed. And then it introduces the concept:
"Rather than gambling away the planning time, try some exploratory tests - what ever comes in mind."
It then pulls a core concept out:
"Always write down what you do and what happens when you run exploratory tests."
It moves on to give examples of how, with surprising problems you did not anticipate when planning, your formal test series is now useless for figuring out what the problems with the software are.

Pages 7-11 go into Step 5: Summarize what you know about the program and its problems. It introduces a table of "Further exploratory tests". It then discusses a possible implementation as a reason for the problems testing is finding, explaining ASCII character codes and linking boundaries to them. Finally, it describes a second cycle of testing after fixes and reminds us there will be multiple cycles and that you want to select tests for each cycle based on what the programmer said she changed. The book emphasizes that it is not a mere act of selecting - you need to come up with new tests too.

Of a total of 16 pages on the first cycle of testing, the book discusses exploratory tests for 5 pages, at a time when the rest of the world was still speaking only of scripted tests.

Page 215 talks about Initial development of test materials. It talks of parallel work on testing and the test plan, saying "you never let one get far ahead of the other". It moves on to the first prompts you should consider: testing against documentation, starting to create a function list, and analyzing limits. It never mentions exploratory testing, but describes how a plan is created as per "our approach", which reads as exploratory.

Page 241 is on components of test planning documents and specifically three kinds of matrices: environment matrix, input combination matrix and error message and keyboard matrix. For input combination matrix, it elaborates:
"Our approach is more experiential. We learn a lot about input combinations as we test. We learn about natural combinations of these variables, and about variables that seem totally independent. We also go to the programmers with the list of variables and ask which ones are supposed to be totally independent."
The book was a collaboration between Cem Kaner, Jack Falk and Hung Quoc Nguyen, and as such it does not outline whether exploratory testing was coined by one or all of them, but it being published under their names says there was some level of agreement to allow such a concept in the book.

While these are the specific parts indexed under "exploratory testing", the way I read it, the whole book is a description of exploratory testing as it was known then - the smart way of testing with care but with attention to costs and the nature of testing in product companies that lived or died with wasteful practices.





Thursday, November 21, 2019

Tools Guide Thinking

I have spent a significant chunk of my day today thinking about how exploratory testing is the approach that ties together all the testing activities, even when test automation plays a significant role. Discussing this with my team, from every developer to the test automation specialists to the no-code-for-me tester, finding a common chord isn't hard. But explaining it to people who have a different platform for their experiences isn't always easy. Blogging is a great way of rehearsing that explanation.

I frame my thinking today around the idea I again picked up from Cem Kaner's presentation on Exploratory Testing after 23 years - presented 12 years ago. 
"Tools guide thinking" - Cem Kaner
Back then, Cem discussed tools that would support exploratory thinking, giving examples like mindmaps and atlas.ti. But looking back at that insight today, the tools that guide a rich multi-dimensional thinking can be tools we think of as test automation.

We have a tool we refer to as TA - shorthand for Test Automation. It is more than a set of scripts doing testing, but it is also a set of scripts doing testing. To briefly describe the parts:

  • machinery around spawning virtual environments 
  • job orchestration and remote control of the virtual machines
  • test runners and their extensions for versatile logging
  • layers of scripts to run on the virtual environments
  • execution status, both snapshot and event-based timelining
Having a tool like this guides thinking. 
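As an illustration of the last part of that list - execution status as both a snapshot and an event-based timeline - here is a minimal sketch in Python. This is my own simplification, not the actual TA tooling: every test run appends a timestamped event, and the same event log answers both the "does it still work right now?" question and the "what happened over time?" question.

# Illustrative sketch only - not the TA described above.
from datetime import datetime, timezone


def run_and_record(test_name, test_fn, events):
    """Run one test callable and append a timestamped result event."""
    started = datetime.now(timezone.utc)
    try:
        test_fn()
        status = "pass"
    except AssertionError:
        status = "fail"
    events.append({"test": test_name, "status": status, "time": started.isoformat()})


def snapshot(events):
    """Latest status per test - the 'does it still work right now?' view."""
    return {event["test"]: event["status"] for event in events}


def timeline(events):
    """All events in time order - the 'what happened and when?' view."""
    return sorted(events, key=lambda event: event["time"])


def passing_check():
    assert 1 + 1 == 2


def failing_check():
    assert 1 + 1 == 3


events = []
run_and_record("addition", passing_check, events)
run_and_record("broken math", failing_check, events)
print(snapshot(events))   # e.g. {'addition': 'pass', 'broken math': 'fail'}
print(timeline(events))

The design choice worth noticing is that the snapshot is derived from the event log rather than stored separately - which is what makes going back to the events with new questions possible.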

When we have a testing question we can't see from our existing visualizations, we can go back to event telemetry (both from product and TA) and explore the answers without executing new tests. 

When we want to see something still works, we can check the status from the most recent snapshot automatically made available. 

When we want to explore on top of what the scripts checked, we can monitor the script in real time in the orchestration tooling, seeing what it logs, or remote into the virtual machine it is running on and watch. Or we can stop it from running and do whatever attended testing we need.

We can explore a risky change seeing what the TA catches and move either back or forward based on what we are learning. 

We can explore a wide selection of virtual environments simultaneously, running TA on a combination we just designed. 

When we want a fresh image to test on without any scripted actions going on, we take a virtual environment that is at our hands, ready to run in the 2 seconds it takes to type it into a remote desktop tool.

It makes sense to me to talk about all of this as exploratory testing, and to split it into parts that are by design attended and unattended. A mix of those two extends my exploration reach.

With every test I attend to - either by proactive choice or reactively, called in by a color other than blue (or by an unexpected blue, knowing the changes) - I learn. I learn about the product and its quality, about the domain, and, most importantly for exploratory testing, about what more information I want to test for.

Tools guide my thinking. But this tooling does not limit my thinking, it enables it. It makes me a more powerful explorer, but it does not take away my focus on attended testing. That is where my head is needed to learn, to take things where they should go. Calling *this* manual is a crude underrepresentation of what we do.

A Good Week for Exploratory Testing


Exploratory Testing is a 35-year-old approach to testing, created back in the day to describe a style of testing common in Silicon Valley companies that was clearly different from what we saw as mainstream.

In the 35 years of Exploratory Testing, we've grown to understand it better.

We now understand that the core of exploratory testing is the degree to which learning from the test we do now impacts our choice of what we do next.


We understood clearly that the old and traditional way of testing, where people design and document tests so they can later be executed effectively, is not what we would call exploratory testing. Exploratory testing is something else.

Where we still struggle is understanding the relationship of test automation and exploratory testing.

Test automation is a process in which we learn. Exploratory testing is a process in which we learn. When we center incremental and iterative work, and learning in multiple dimensions, they are two sides of the same coin.

This was a good week for exploratory testing, because someone many of us follow raised it up by writing about it.
Fowler describes it as:
  • a style of testing that emphasizes a rapid cycle of learning, test design, and test execution
  • avoiding separation of script creation and execution
  • avoiding predetermined expected behavior (being open to more than what we predetermine)
  • seeking to find new behaviors not covered by already defined tests
  • seeking failures defined tests don't catch
  • informal but relying on discipline to do it well
  • something that is good to consider as a task type of its own
  • requiring skill, curiosity and a learning mindset
 Sounds like how great test automation is created! But this is exploratory testing? YES!

Fowler concludes his article: 
Even the best automated testing is inherently scripted testing - and that alone is not good enough.
I argue that the best automated testing is inherently exploratory testing. The bad automated testing is inherently scripted testing. Because test automation, when it really works out, is a product of documenting what we are learning in an executable artifact.

I specialize in exploratory testing. While I read and even occasionally write automation code, it is a platform for great exploratory testing. It extends my reach. Exploratory testing includes test automation.





In testing, we start with a baseline of knowledge and tools. I start my day with a very different baseline than someone new to my organization. Where exploratory and scripted approaches differ is in where our feedback loops are, and what learning is welcomed. The scripted approach learns about testing at the end, rather than continuously.