Friday, November 26, 2021

Removing Robot Framework From My World

Making progress in removing Robot Framework from my world. Why, you might ask, especially when so many managers in Finland have been taught to expect it. If you do something where you frequently need the help of a community to learn further, why would you choose something where searching for information doesn't surface a large community?

I put a complete newbie through a month of Robot Framework and a month of pytest. Pytest won, hands down. My key takeaways:

  1. Information search is a completely different experience: both the quantity and the *tone* of the responses the communities provide differ, and developer-first communities do a better job of leveling materials for fast-tracking newbies
  2. Debugging tests in an IDE - you'll need it, and you'll save tons of time avoiding abstractions that take some of that power away from you. Running a single test from a suite is pure bliss when the full suite takes an hour to run (see the sketch after this list)
  3. One less layer, one less source of problems. Tools have bugs. You can always let the bugs hit others first and carefully select the time when you allow for the new, but a foot on the brake is energy away from where it should be. And when it is not bugs, you are waiting for (or contributing to) the translation layer to get the new cool stuff from the underlying library into your use.
  4. Learn more. Robot Framework has become a synonym for bad programming combined with bad testing. In recruiting, it is more likely to mean "not good" than the other way around. Sad, but this is the trend. Making easy things easier is great, but making hard things harder gets people to stop at easy over important. Look at the behaviors people have with tools; there is a difference that matters
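
To make the pytest side of the comparison concrete, here is a minimal sketch - the Cart class and the test name are hypothetical stand-ins, but the test-selection commands are standard pytest:

```python
# A minimal pytest sketch; Cart and the test below are hypothetical
# stand-ins for whatever your own suite covers.

class Cart:
    """Tiny example class under test."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


def test_adding_an_item_grows_the_cart():
    cart = Cart()
    cart.add("book")
    assert len(cart.items) == 1

# Run only this test out of a large suite:
#   pytest test_cart.py::test_adding_an_item_grows_the_cart
# Or select by name expression:
#   pytest -k adding_an_item
# An IDE debugger breakpoints straight into this plain Python function,
# with no keyword-translation layer in between.
```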

In an hour of contemporary exploratory testing, we can go from this: 


to this - with people who have never written a line of code, with the power of ensemble programming.
Next up: recognizing the ten bugs these tests document as "works as implemented".


Sunday, November 21, 2021

Balancing the Speaker Circuit

On this lovely Sunday, representatives of two different conferences ended up in the things I read, with a very similar message. Both conferences would love to have more women speak, but both also suffer from the same problem - women won't submit.

With this thought on my mind, I browsed further into the random things people share and came by a visualization on why it is a problem that hospitals have as many vaccinated as unvaccinated people. This led me to think further about the tech conferences' problem of equal representation.

When the size of a population is large enough, a small location such as a conference stage is easy to fill. All you need is the five "best people" from each of your two categories, and your stage is full. With a population large enough, you can't claim that the people you choose while using some helpful categorization aren't the best - you can't have everyone on display anyway, and for almost any topic that a representative of one group could present, you could find a representative of another group.


This takes us to why it matters. Conferences model the world we expect to have. Every speaker invites speakers who identify with them to feel included - through their topics, their experiences and their representation. The world has slightly more women than men, and that is what the tech of the future should look like. Intelligence is distributed across the world, and we want to bring intelligent people in to contribute to tech.


The current reality we source our speakers from, however, is the current tech industry, where we still have a lot more men than women. And with this dynamic, there is extra work included in being part of the minority group. We still have plenty of people in both groups to choose from, but the majority group uses less energy in just existing and thus has more energy to exert in putting themselves forward.

Expecting people in both groups to show up equally in response to a call for proposals assumes that existing is equally laborious for both.

While we don't have equality, we need equity - particularly reaching out to the minority group so that we can have a balanced representation. We shouldn't be choosing the "best out of the people who did the free work for us" in a CFP; we should be choosing the best lessons our paying audiences benefit from in creating the software of the future.

They say that they won't come to your home to find you, but with the power of networks, we could easily source the people - both men and women - from the companies doing the work and thus qualified to share the work, without making the people themselves do the work of submitting to a call for proposals.

With 465 talks under my belt, I still rarely get paid for the teaching I do from all those stages. But the longer I am at this, the more I require that the conferences who won't pay for my work do their own work of reaching out for a balanced representation. I'm personally not available without an invitation, and I consider the invitation a significant part of not having to invest quite so much into speaking. But I think this is true for people who need the extra step of an invitation to feel welcome, and to dare to consider taking the stage.

Find the ones who aren't in current speaker circulation, and invite them. Nothing less is sufficient in this time of connectedness. 


Tuesday, November 16, 2021

Increasing Understanding of Modern (Exploratory) Testing

Many, many years have passed since I published an article with the title Increasing Understanding of Modern Testing Perspective (2003). What I argued back then was that the V-model is harmful, and funnily enough, it has been years since I have run into that model anywhere but in academic writings. We treat unit, integration, system, and acceptance testing very differently these days.

Now that I seek to understand and explain modern testing, I seek to understand and explain exploratory testing - the approach. A few months ago I wrote about how we have plenty of ways to talk about it that confuse people, so adding labels to make sense of the difference between the two is necessary. Just as we did not need the term acoustic guitar before we had the electric guitar to distinguish it from.

I seek to find ways to talk about contemporary exploratory testing, which has recently given me great results at work. 

  • We now fairly regularly release new versions of our firmware and upgrade the customers automatically for a particular experimental product
  • We moved from 34 working days of release testing to 2 days of release testing
  • We release two (soon three) products simultaneously when we used to release only one
  • We have 39% test automation coverage (scoring features we've put into production as having none, some, or good enough automation; one way to compute such a number is sketched below), and the reliability of tests has moved from weeks to fix to hours to fix
  • We find bugs that used to escape us

We do a better job with this idea of intertwining automation into exploratory testing. The same people explore with and without code.
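
For the coverage number above, here is a minimal sketch of one way to fold per-feature ratings into a single percentage. The weights are my illustrative assumption; only the none / some / good enough scale comes from how we rate features:

```python
# One possible way to turn per-feature automation ratings into a single
# coverage percentage. The weights are an assumption for illustration.
WEIGHTS = {"none": 0.0, "some": 0.5, "good enough": 1.0}

def automation_coverage(ratings):
    """Average the weighted ratings over all features in production."""
    return 100 * sum(WEIGHTS[r] for r in ratings) / len(ratings)

# Hypothetical ratings for four shipped features:
print(automation_coverage(["none", "some", "good enough", "good enough"]))  # 62.5
```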

But to explain that what we do is different, I've been seeking ways to visualize it. I've tried explaining the different ways we have to talk about exploratory testing.

I've tried explaining that we apply exploratory testing in different scopes, and my scope includes the whole - contemporary exploratory testing is an approach to testing, not a technique. 
I've tried explaining we frame what belongs inside the box of exploratory testing differently - contemporary exploratory testing includes test automation, very explicitly. 

I'm still processing what might be helpful ways of explaining which kind of exploratory testing we are sharing results on. Because the fact is, my results now are significantly different from my results back in the days of the other kinds, and I've been through them all.

Labels help me sort out my past from my today, and hopefully share better what my today is. I've tried doing that with my talks recently - on Contemporary Exploratory Testing, Test Automationist's Gambit and Hands-Off Exploratory Testing to Manage in Scale.




Thursday, November 11, 2021

The Ways Bugs Cost Us

When I was growing up to be a tester, I learned to think in terms of importance when it comes to bugs. 

Working with remotely installable antivirus, the bugs that would block us from remotely fixing a broken remotely installed antivirus were the tough ones on the global market. And I learned that really well by one day distributing a new version to early adopters, one of whom was a high-level exec, and sending someone over to their home to fix their computer that could no longer get online on their remote work day. We considered this so important that a really insightful feature was designed to enable that fixing.

I was thinking about this bug yesterday, when I was watching a sales colleague wonder about a device installed up in the air out of his reach, figuring out if we really would need to lift him up there with a computer to know what was going on. This time we were lucky - there was enough time to resolve the issue, but the time needed was long enough to make us worry. A positive outcome, though, was really being part of the experience and building a better connection with a colleague I don't always work with closely - that relationship usually turns into magic over time.

Processing this experience against the kind of problem I thought it would be, I stopped to reflect on how the world as I know it has changed.

The importance of the bug is less central now. The speed of analysis, and of applying a fix, is the new essential thing.

I've had seemingly small problems (in terms of the mistakes we made in creating the software) that took a long time to fix, because in a multi-team distributed system, finding the right person feels like one of those jokes in which you knock on each door only to be directed to another one, ending up back where you started with more information.

If this throughput time to resolution is intertwined with the importance in scale, time escalates the problem. 

Time from report to a fix in the customer's environment is key.

Every customer matters. 


Sunday, November 7, 2021

Ensemble Programming and Behaviors for Hands, Brains and Voices

While Agile 2021 was ongoing in the summer, I was doing an observation activity that I never got around to finishing. I was organizing a series of new groups trying out ensemble programming, and watching what they do. What I ended up with was material I just ran into today, which I had titled "Behaviors for Hands, Brains and Voices".

If you have ever been wondering what you should do when ensemble programming, this listing might be beneficial.

The Roles

We have established we have three roles in Ensemble Programming:

  • Hands (driver) are on the keyboard and don't make decisions. 
  • Brains (designated navigator, talker, translator, pilot) is the current main decision-maker and uses words to enable hands to work effectively. 
  • Voices (other navigators) are everyone else who support brains in getting the work done by providing added timely, correct information.

Sounds easy, but looking at what people actually do, what are the typical things people in these roles do? What are the behaviors we could observe and fine-tune to get our group working really well together?

Behaviors for Hands

  • Ask clarifying questions on what to type
  • Intentionally write/do something the Brains did not mean, to model correcting
  • Write slowly to encourage thoughtful navigation
  • Out of two ways of doing what the Brains ask, choose the one you think is worse to see the ensemble's reaction
  • Listen to the Brains carefully and do what is requested to the best of your ability
  • Listen to everyone and ask the Brains to make choices when you recognize multiple requests

Behaviors for Brains

  • Give instructions to the Hands at a pace they can consume
  • Navigate on the level of intent, and drill in through location to details if you see no movement
  • Choose a solution from the ensemble you would not have chosen and give it a chance to unfold
  • Invite proposals on the solution or next step from the Voices before deciding where to go
  • Listen to the Voices making proposals and help the Hands choose what to do
  • Navigate at a high level of abstraction, focusing on reviewing direction and implementation, and make space for the Voices to improve the end result

Behaviors for Voices

  • Make an observation about the application and point that out to others
  • Make an observation about the group working together in the moment and point that out to others
  • Notice someone trying to make a point and support them in getting the space
  • Notice someone not engaging and invite them to contribute
  • Categorize what you want to say as now (you need to hear this as it changes what we do to be right), soon (you need to hear this within this thread of conversation) and later (I want to say this but it can wait as long as I remember it)
  • Raise hand to indicate you want to say something but it isn't urgent enough to interrupt
  • Propose a better way of doing what is being done right now 
  • Ask a question that improves focus and gets the ensemble moving forward
  • Recognize the need to talk about how we work and propose a retrospective discussion
  • Offload your ideas for the next activity the ensemble should focus on as post-its on a shared wall
  • Quietly make notes of bugs the ensemble isn't seeing, to come back to them soon
  • Propose to make a shared note on the shared computer for documentation / group synch purposes
  • Correct small mistakes like typos after giving the Hands (driver) a chance to correct at a time fitting their writing style
  • Point out possibility of passing-by cleanup or test
  • Point out possibility of cleanup before changing the area
  • Invite group to brainstorm solutions
  • Invite group to choose least likely solution to be implemented first
  • Point out to the group if we are not doing what we agreed to be doing
  • Point out to the group if we appear to be doing what we agreed on, but it is not important
  • Suggest changes in how the current work is done, e.g. "could we test in another browser / with another data sample?"
  • Directly speak to the Brains to help them improve their navigation


Reacting to Numbers

A significant part of my work is to explore how testing is done where I work, in scale, to figure out what I should help with, what I should focus on, and what might even be going on. And it is not like that would be an easy task. 

Talking with people at scale is difficult. I can do questionnaires at scale, but there are only so many meaningful conversations I can have in a day of work. But in the last year, I have started to be able to give shape to describing those conversations with numbers, knowing that I connect with about 60 people monthly: 30 are constant, from the project / team I focus on, and the other 30 vary, one group of 30 one month and another group of 30 the next.

From the conversations I have had, I have found out where people keep their artifacts, and sampled them. I've had conversations comparing their artifacts to the things they tell me, and come to the conclusion that actively pulling help is a hard thing to do, because the help people would know to pull is limited to what they know they know.

In addition to sampling their artifacts, I have counted them. And last Friday I showed a group of peers numbers from counting changes (pull requests) to a particular artifact (the test automation system's code) to start a conversation, and got a bit of a conversation I did not expect.

Personally, I look at the quantitative side of pull requests as an invitation to explore. A small number, a number different from what I would expect, and a large number all require me to ask what happens behind that number. I am well aware that pull requests to test automation represent only a part of the work we do, and that I could create a higher number artificially by splitting the changes. But what a number tells me is that nothing will change if we don't change anything. The number of test automation pull requests in relation to pull requests to the application the automation tests tells me a little about how we work on things that go together (an app and its tests), and the number of people contributing to the code bases tells me a little about how tightly specialized maintaining test code bases is. There's not a number I expect and target; it is a description of what things turned out to be.
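
To show the kind of counting I mean, here is a minimal sketch, assuming local clones at hypothetical paths and using merge commits as a rough proxy for merged pull requests:

```python
# A sketch of counting changes and contributors per repository. The
# repository paths are hypothetical, and merge commits only approximate
# merged pull requests.
import subprocess

def git_lines(repo, *args):
    """Run a git command in the given repository, return output lines."""
    result = subprocess.run(
        ["git", "-C", repo, *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

def count_pull_requests(repo):
    """Approximate merged pull requests by counting merge commits."""
    return len(git_lines(repo, "log", "--merges", "--oneline"))

def count_contributors(repo):
    """Count distinct commit author emails, hinting at specialization."""
    return len(set(git_lines(repo, "log", "--format=%ae")))

for repo in ["app", "test-automation"]:  # hypothetical local clones
    print(repo, count_pull_requests(repo), "merge commits,",
          count_contributors(repo), "contributors")
```

The numbers themselves are the start of the conversation, not the answer; a repository with three contributors and one with thirty invite very different follow-up questions.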

If I ask for a number, or go get a number, I find the idea of "you must tell me exactly what question you are trying to answer" peculiar for someone who is exploring. My questions are not absolutes, but probes. Exploring, like with failing test automation, calls me to dig in deeper. It is not the end result, it is a step on my way.

Distrust in numbers runs deep. And while I decided to be ok trusting managers with numbers, I have been learning that the step that was difficult for me is even more difficult for others. So it's time to make it less special, and normalize the fact that numbers exist. Interpretations exist. And conversations exist, even ones I would like not to have because they derail me from what I would like to see happen.


Friday, November 5, 2021

Turn work samples into a portfolio

Over the last few weeks, I have had the pleasure of discussing testing in general, test careers and finding your start with Okechukwu Egbete. He lives in Finland (Oulu), completed a degree in financial studies here, and has been working towards finding a great position to grow into testing. In addition to great, inspiring conversations, we've been pair testing together and geeking out about the peculiarities of software quality and this industry.

One of the conversations we had this week was about him being a little busy with various possible positions, each asking for a homework sample. Imagine this from the job seeker's perspective: every single company has an expectation of 8-16 hours of exercise to show you are a worthwhile candidate. And not only that. Every company has a different exercise. And as I was reminded of how this works, some companies send the exercise before they talk with you, without even the basic level of verifying that they aren't wasting the candidate's time.

It is easy for me to say that I find companies at least on the verge of misusing their position when selecting candidates. Having spent, at worst, two full days being assessed by psychologists and potential colleagues - delivering training, doing homework and filling complex loops of papers - for a single position I rejected over the final interviewer's attitude after they offered it, I can appreciate the load companies feel they have the right to expect without compensation.

When finding that first opening, you won't walk away at the end. But the work needed can be even more significant. 

So I propose turning the activity into a +1 for you. When a company sends you an exercise, create a private project on GitHub for that activity, as well as for all the other companies' activities - turn the work you do, as it is your work, into your portfolio. And include the growing portfolio of samples in your application. Mark clearly which of the things you have you consider your best work. Show the level of effort you've put into different samples. Don't publish the test problems companies have, but use your solutions to those problems as part of your continued job search.

It would impress me if people did that - as long as you show you are mindful of not making anyone's future recruiting efforts harder.