Tuesday, August 31, 2021

Stray Testers Unite!

 I have been observing a phenomenon: there are stray testers out there. 

It is unfortunately not uncommon that testers find themselves wandering at large, or lost as to what it looks like to do a good job at testing. 

For one tester, being stray manifests as them waiting for a project to truly start. There's architecture, there's requirements, there's conversations, there's strategies, there's agreements, there's team building and all that, but when it comes to testing, the early prototypes often feel better left for developers to test. And there is only so much one can support, lead and strategize. 

For another tester, being stray manifests as them following so many teams that they no longer have time for hands-on work. They didn't intend to be lost in coordination and management, but with others then relying on them to know and summarize, it comes naturally. They make the moves but find no joy. 

For a third tester, being stray manifests as not knowing where to even start and where to head. With many things unknown to a new professional and little support available, trying to fulfill vague and conflicting expectations about use of time and results leaves them wandering around. 

In many ways, I find we testers are mostly strays these days. We consider how *we* speak to developers but don't see developers putting the same emphasis on the mutual relationship. We navigate the increasingly complex team and system differences, figuring out the task of "find (some of the) bugs that we otherwise missed". We are expected to use little time, find many bugs, and document everything in automation, while creating nice lists of bug reports in a timely manner. The ratio of our kind is down, driven to zero by making some of us assume new 'developer' identities. Finding our tribe is increasingly difficult, and requires looking outside the organization to feel less alone and different. 

Communities are important. Connections are important. Caring across company boundaries is important. But in addition to that, I call for companies to do their share and create spaces where testers grow and thrive. We need better support for skills and career growth in this space. We need time with our peers, for them to help us and for us to learn together. We need the space to learn, and the expectation and support from our teams for doing that. 

Make sure you talk to your colleagues in the company. Make sure you talk to your colleagues in other companies. It will help us all. Stray testers need to unite. 

Tuesday, August 24, 2021

Social Media, a Practitioner Perspective

Someone at the company I work with invited me to share my experiences on being active on social media about work-adjacent topics, particularly with what they framed as *thought leadership on LinkedIn*. In preparing to share, I did what I always do. I opened a page in my notebook and started scribbling a model of the things I would like to share. And since I did that step and shared internally, I realized writing it down would be useful, to notice a slow change of mind over time. 

Where I'm at With Social Media

My story starts with the end - where I am today. 
  • 4100 connections on LinkedIn
  • 7221 followers on Twitter
  • 754 739 blog views over 775 blog posts
  • 450 conference sessions
I didn't set out to do any of this. It all grew one thing at a time. And I don't consider it a significant time investment; it merely reflects doing many little things over and over again, over many, many years. 

Why It Matters, Though

I recount the story of how I got my latest job. A tweet I shared announcing I was ready to move, lovely people reaching out in significant numbers to discuss opportunities, turning into creating a position together that I would accept. This was my second position built like this, with an even better experience than before - I met people both on the hands-on side and in management before making the mutual commitment to take quality and testing forward together. It was also my second position found like this, one I would not have found and thoroughly enjoyed without the network. This one was both found with the network and built with the network. 

If I didn't have my connections, this would not be possible. 

Traversing the Timeline

Drafting on an electronic whiteboard, I drew a timeline with some of the core events. 
  • 2009 I wrote my first blog post. 
  • 2010 I joined Twitter.
  • 2015 I realized LinkedIn was not just for people I knew and had met, but for all the professional connections I wanted to uphold.   
  • 2020 I started posting on LinkedIn. 
My history with social media is not that long. And while it may now be strategic, it did not start off that way. 

My whole presence on social media is a continuation of the work with non-profits and communities I started in 1993. At first, being one of the very few women in tech at Helsinki University of Technology turned me into a serial volunteer. Later this background led me to volunteer from 2001 to 2011 with the Finnish Association for Software Testing. Finally, I founded Software Testing Finland ry in 2014. 

Non-profits and communities were important to me for the very same reason social media is now important to me. 
I am a social learner who fast-tracks learning and impact with networks of people. I show up to share and learn, to find my people and the content that helps me forward in understanding.  
I started speaking at conferences to get over a paralyzing fear of stages. 
I soon learned that the best way to attract good learning conversations was to share what I was excited about, from the stage. 
I started blogging to take notes of my learning. 
I later learned that traveling to stages is limiting, when your blog can be a source of those connections too. 

My Content Approach

If my thinking takes a page or more to explain, I write a blog post. 
If I have a thing to make a note of that I can say in public and it can be summarized shortly, I tweet it.
If it is too long to be a single tweet and I want to refrain from tweet storming a series of tweets, I take it to LinkedIn.
If I am seriously thinking it is good content that should be relevant for years, I publish it in one of the magazines or high traffic blog sites known for good content. 
If it can't be public, I have private support groups both inside the company and outside to discuss it. 

I don't have a schedule. I have no promises on where I post and when. It emerges as I follow my energy and the need to structure my thoughts. 

Making Time

My most controversial advice is probably around how to make time. 

I have two main principles on how I make time:
  1. No lists. When the urge to write something down on a list hits me, I just write it to a proper channel instead. Time spent on lists is time away from publishing the finished piece.
  2. Write only. I use all of the public channels as write-only media. I very often mute the conversations, and if I have time and energy, go back to see what is going on. Sometimes I mute on the first annoying comment. And I have taken coaching to find peace in not arguing and explaining, and to expect that people showing up in my mentions for a conversation approach with curiosity and accept that I am not always available. 
I read things, but on my terms and schedule. I read what others write for a limited amount of time and what I see is based on luck and some of the algorithms making choices for me. I read comments and make active choices of when to engage. 

Social media is not my work. I have a testing job with major improvement responsibilities to run. Social media is a side thing. 

Deciding on What You Can Say

Finally, we talk about what we can say. I have carefully read and thought about the social media guidelines my employers have, and I seek to understand their intent. My thinking on what to say is framed around professional integrity and caring for people and purpose. Professional integrity means for me that I can discuss struggles and challenges, as long as I feel part of the solutions we are working on. Caring for people means I recognize that many of my colleagues read what I write and recognize themselves, even in writing I did not think was about them but about general challenges that many people recognize. Caring for purpose means thinking about how to do no harm while maintaining integrity. 

We all choose a different profile we are comfortable projecting. What you see me write may appear unfiltered, but I guarantee it is not. 

The impacts of sharing openly are varied. Sometimes knowing what thoughts I am working through is an invitation for people to join forces and see perspectives. Most often people trust my true enthusiasm for solving each and every puzzle, including ones involving compromises. Sometimes I have offended people. I've appreciated the one who opened a conversation on the feelings my reflections raised when some of my wishes seemed impossible at the time.

I also remember well how one of my blog posts caused a bit of a discussion in my previous place of work. I still maintain my stance: it was a blog post about how test automation very often fails to reach the value in projects when done on the side, a problem I have lived through multiple times. But it was written at a time when, for someone, that sting of failure on their technical success was too much to handle. 

I apologize when I hurt people, and while I don't apologize for their feelings being hurt, I work to understand what it was that I did. Apologizing comes with a change I will try to make. 

Final Words

When we were discussing me joining my current organization, my recruiting manager called an old manager of mine from over 10 years ago. The endorsement came with a realistic warning: Maaret is active on social media, and you want to be aware of that. 

I was hired regardless, and love my work. It is always ups and downs, and being visible/vocal holds power I try to use with respect. 

The platform is part of building a career instead of holding a job. Be who you want to be on social media so that it supports your career. My version is just my version, and yours should look like you. 

Wednesday, August 18, 2021

Future of Testing and Last Five Years?

This morning opened up with Aleksis Tulonen reminding a group of people that five years ago he had asked us a question about the future of testing in the next five years. 

I had no idea what I might have said five years ago, but judging by what I am willing to say today, the likelihood that I said something safe is high. 

So, what changed in five years? 

  • Modern Agile. I had not done things like No product owner, No estimates and No projects at the last checkpoint. I have done them now. And I have worked in organizations with scale. These rebel ideas have lived at the scale of the team I work in, with the relevant business around us. 
  • Frequent Releases. It may be that I learned to make that happen, but it is happening. I've gone through three organizations and five teams, and moved us to better testing through more frequently baselining quality in production with a release. And these are not all web applications with a server; globally distributed personal computers and IoT devices are in my mix.  
  • Integrated engineering teams without a separate testing group. Testers are in teams with developers. They are still different but get along. Co-exist. Separate testing groups exist in fewer places than before. Developers at least help with testing and care about unit testing. You can expect unit testing. 
  • Exploratory includes automation. The profile of great testers changed into a combination of figuring out what to test and creating code that helps test it. The practice of "you can't automate without exploring; you can't explore (well) without automating" became day-to-day practice in my projects. 
  • BDD talk. BDD became a common storyline and I managed to avoid all practical good uses. I tried different parts of it but didn't get it to stick. But we stopped using the other words as commonly - specification by example and acceptance test driven development lost the battles.  
  • Ensemble Testing and Programming. It moved from something done at Hunter to something done in many, but still rare, places. I made it core to my teaching and to facilitating exploratory testing at scale in the organizations I work at. And it got renamed after all the arguments about how awful 'mobbing' sounds. The new term hasn't won everyone over yet, but it has traction. 
  • Testing Skill Atrophy. Finding people 'like me' is next to impossible. Senior people don't want to do testing, only coach and lead testing or create automation. Senior testers have become product owners or developers or quality coaches but rarely stay in hands-on testing. We are more siloed within testing than we were before. And finding "a tester" can mean so many things that recruiting is much harder these days. 
  • Developers as Exploratory Testers. Developers started testing: in addition to small increments with test-after cycles taking us to a good level of unit testing without TDD, developers started driving and contributing to exploratory testing on different scopes of the system. They were given permission to do 'overlapping' work and ran further than testers got in the same timeframe. 
  • Test Automation University. Test automation became the go-to topic for new testers to build their learning around. Test Automation University, headmastered by Angie Jones and sponsored by Applitools, became a bazaar of materials on many different tools. 
  • Open-Source to Fame. Everyone has their own tool or framework or library. Everyone thinks theirs is better than the others. Very few really know the competition, and marketing and community building are more likely to lead to fame. Starting something became more important than contributing to something. 
  • Browser Driver Polite Wars. Alternatives to Selenium emerged. Selenium became the standard, and even more alternatives emerged. People did a lot of browser testing, and Cypress made it onto the JS-world radar for real. Playwright started but is still in the early buzz. Despite options that are impossible to grasp for so many people in development efforts (there's other stuff to focus on too!), people mostly remained respectful. 
  • Dabbles of ML. The first dabbles into machine learning in the testing space emerged. This space is dominated by commercial offerings, not open source. And programming was "automated" with GitHub Copilot, which translates well-formulated intentions written as comments into code machine-learned from someone else. Applications with machine learning became fairly commonly available, and bug fixing for those systems became different. 
  • Diversified information. There are more sources of information than ever before, but it is also harder to find. Dev.to and self-hosted blogs are the new go-to, and in addition to written content, video and voice content has become widely available. The difficult part is figuring out what content makes sense to give time to, and we've seen the rise of newsletter aggregators in the testing field. 
  • One Community is No More. Some communities have become commercial and in many ways now resemble tool vendors. Come to our site, buy our thing - paywalls are a thing of the day. At the same time, new sources have emerged. There is no "testing community"; there are tens of testing communities. Developer communities have become open to testing, and choosing to hang out with people around the language you work in has become something more testers opt in to.
  • Twitter Blocks Afoot! While visible communication is more civil and less argumentative than before, people block people with a light touch. If blocking someone five years ago was an insult, now it is considered less of an insult and more of a statement of curating the things you end up reacting to in the world.
  • Women in Testing. The unofficial Slack community grew and became a globally connected group of professionals. A safe space with people like me enabled me to start paying attention to content from men again, and it saved many people from feeling alone and isolated in challenging situations. The community shows up at conferences in group photos.
  • DevOps. It is everywhere. The tools of it. The culture of it. The idea that we pay attention to 'testing' in production (synthetic tests and telemetry). 'Agile' became the facilitator wasteland and developers of different specialties grouped their agile here. 
  • Cloud. Products went cloud first. Supporting tools followed suit. And many corners of the software development world became cloud-native.
  • Mobile & API. These became the norm. REST APIs (or gRPC) in the IDE are the new UI testing. Mobile has separate language implementations for presentation layers and forced us to split our time between Web and Mobile. 
  • Crowdsourcing. It remains, but it did not commoditize testing much. I find that almost surprising, and see it as hope for a better future where testing is not paying users to hang out with our applications, giving them peanuts for their time and bigger peanuts if they are lucky enough to see a bug. 
I most likely forgot a trend I should have named, but the list of reflections is already long. But back to what I predicted. 
I don’t think much will change yet in 1, 3 or 5 years, other than that our approaches continue to diversify: some companies will embrace the devops-style development and continuous testing, while others figure ways for releasing in small batches that get tested, and others do larger and larger batches. Esp. in the bespoke software sector, the forces of how to make money are so much in favor of waterfall that it takes decades for that to change.

But if there is a trend I’d like to see in testing, it’s towards the assumption of skill in testing. Kind of like this:

- black-hat hackers are people who learn to exploit software for malice.

- white-hat hackers are double agents who exploit software for malice and then give their information to the companies for good.

- exploratory testers are white-hat hackers that exploit software for any reason and give that information for companies. From compromise to more dimensions like “hard to use”, “doesn’t work as intended”, “annoys users”.

- because exploratory testers are a more generalized version of hackers, exploratory testing will go away at the same time as software hackers go away. You get AI that writes programs, but will need exploratory testers to figure out if the programs that are produced are what you want.

I don't think I see my hope of "assumption of skill in testing". I see better programmers who can test and a few good testers. Being a developer with a testing specialty is one of the entry-level roles, and everyone is expected to pick up some programming. Acceptance testing by non-programmers remains, is lower paid and has a different time-use profile - as much time, but in small deliveries and with better quality to begin with. 

My bet that AI will come for the programmer jobs barely fits the 5-year window, but from where we are now, I wouldn't say I am wrong about it. Then again, testers became programmers and we started understanding that programmers don't write code 7.5 hours a day. 

Next five years? We'll see how that goes. 



Conflicted in Bug Reporting

In 1999 I was working on writing a research paper on bug reporting systems. I was motivated by the fact that the very first project I started my career on (Microsoft) did the bug reporting tooling so well compared to anything and everything I had seen since, and their tool was not made available to the public. It still isn't. 

With a lot of digging into published material, I found one thesis that included public descriptions of what they had, but it was limited. So I read through a vast amount of other literature on the topic and learned that bug classification was academia's pet. 

I never finished the research paper, but everything I learned in the process has paved my way in figuring things out in projects ever since. And with another 22 years of really looking at it and thinking about it, I have come to disagree with my past self. 

The current target state for bug reporting that I aspire to lead teams into is based on a simple principle:
Fix and Forget. 
I want us to be in a state where bug reporting (and backlogs in general) is no longer necessary. I want us to have so few bugs that we don't need to track them. When we see a bug, we fix the bug and improve our test automation and other ways of building products across various layers enough to believe that if the problem were to re-emerge, we'd catch it. 
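To make "fix and forget" concrete, the fix rarely travels alone: it ships together with a check that would catch the problem if it ever came back, so the report itself can be forgotten. A minimal sketch in Python with pytest, where parse_quantity and its negative-quantity bug are hypothetical stand-ins:

```python
# A sketch of "fix and forget": the fix ships together with a check that pins
# the behavior, so the bug report can be closed and forgotten.
# parse_quantity and the negative-quantity bug are hypothetical examples.
import pytest


def parse_quantity(text: str) -> int:
    """Parse a quantity typed by a user; the fix clamps negatives to zero."""
    value = int(text.strip())
    return max(value, 0)  # the fix: we no longer pass negative quantities on


@pytest.mark.parametrize("typed, expected", [
    ("3", 3),
    (" 7 ", 7),
    ("-1", 0),  # the bug that was found: this used to come out as -1
    ("0", 0),
])
def test_parse_quantity_never_returns_negative(typed, expected):
    assert parse_quantity(typed) == expected
```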

With a lot of thought and practice, I have become pretty good at getting to places where we have zero bugs on our backlogs, and as new ones emerge, we invest in the fix over the endless rounds of prioritizing and wasted effort that tracking and prioritizing creates. 

At this point of my career, I consider efforts to classify an organization's internally discovered bugs something we should not do. We don't need the report, we need the fix. We don't need the classification, we need the retrospective actions that allow us to forget the detail while remembering the competence, approach and tooling we put in place. 

At this point of my career, the ideas I used to promote in 1999 - a bug reporting database and detailed classifications for "improvement metrics" - I would consider a leap back in time, and a reason to choose something else to do with the little time I have available and under my control. 

I think in terms of opportunity cost - it is not sufficient that something can be done and is somewhat useful when done. When choosing the work to do, I weigh it against the other work that could be done in the same time if this was not done.

Instead of reporting, reading and prioritizing a bug, we could fix the bug.
Instead of clarifying priorities by a committee, the committee could teach developers to make priority decisions on fixes.
Instead of reporting bugs at all from internal work, introduce "unfinished work" for internal bugs. 
Instead of expecting large numbers of bugs to manage, close every bug within days or weeks with a fix or decision. 
Instead of warehousing bugs, decide and act on your decisions.  



Friday, July 30, 2021

How Would You Test A Text Field?

A long, long time ago we used the question "How would you test a text field?" in interview situations. We learned that there seemed to be a correlation between how well the person had their testing game together and how they handled such a simple question, and we noted there were four categories of response types we could see, repeatedly. 


Every aspiring tester, and a lot of developers aspiring to turn into testers, approached the problem with a simple-inputs-and-automate-it-all approach. They imagined data you can put into the field, and automating data is a natural starting point when there is a simple way of imagining how to recognize success. They may imagine easily repeatable dimensions, even like different environments or speed, and while they think in terms of automating, they generally automate regression, not reliability. Typical misconceptions include thinking the hardware you run on always matters (well, it may matter with embedded software and some functionalities we use text fields for) or that someone else will tell them what to test. It used to be that they talked about someone else's test cases, but with agile, the replacement word is now acceptance criteria. Effectively, they think testing is checking against a listing someone else already created, when that is at most half the work. 
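To illustrate, answers in this category tend to boil down to a data-driven check against a fixed listing of inputs, roughly like the sketch below; validate_username and its rules are hypothetical stand-ins for whatever sits behind the field:

```python
# A sketch of the "imagine data, automate it all" answer: inputs checked
# against a listing of expected outcomes. validate_username is hypothetical.
import pytest


def validate_username(text: str) -> bool:
    """Stand-in for the logic behind the text field."""
    return 1 <= len(text) <= 20 and text.isalnum()


@pytest.mark.parametrize("text, accepted", [
    ("maaret", True),
    ("", False),                       # empty input
    ("a" * 21, False),                 # too long
    ("robert'); DROP TABLE", False),   # special characters
    ("  padded  ", False),             # whitespace
])
def test_text_field_accepts_expected_inputs(text, accepted):
    assert validate_username(text) == accepted
```

Useful as far as it goes, but it only ever confirms the listing it started from.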

Functional testers are only a notch stronger than aspiring testers. They come packed with more ideas, but their ideas are dull - like writing SQL into a text field in a system that has no database; it only matters if there is a connection to an SQL database somewhere further down the line. So while the listing of things we could try has more width, it lacks depth of understanding of what would make sense to do. Typical added dimensions for functional testers are environments, separating function and data, seeing function beyond the interface (like enter vs. mouse click), and applying various kinds of lists and tools that focus on some aspect, like HTML validators, accessibility checkers or security checkers. Usually people in this category also talk about what to do with the information testing provides and about writing good bug reports. On this level, when they mention acceptance criteria, they expect to contribute to it. 

The highest levels are separated only by what the tester in question starts with. If they start with *why would anyone use this?* and continue questioning not only what they are told but what they think they know based on what they see, they are Real senior testers, putting every opportunity to test in the context of a real application, a real team, and a real organization with real business needs and constraints. If they start by showing off techniques and approaches and dimensions of thinking, they still need work on the *financial motivations of information* dimension. The difference from the Close to Senior tester level is in prioritizing in the moment, which is one of the key elements of good and solid exploratory testing. Just because something could be tested does not mean it has to be, and we make choices on what we end up testing every time we decide on our next steps. 

If we don't have multidimensional ideas of what we could do, we don't do well. If we don't put our ideas in an order where, whenever we stop without exhausting them, we have already done the best possible work in the time available, we don't do well. 

With years of experience with the abstract question, I started moving towards making the question more concrete: sharing something that was a text field on the screen and asking two questions:

  • What do you want to test next?
  • What did you learn from this test? 
I learned that the latter question in general helps people do better testing than they would without the coaching that sort of takes place there, but I don't want to hire a tester who is so stuck on their past experiences that they can't take in new information and adapt. I've used four text fields as typical starting points:
  1. Test This Box. This is an application that is only a text field and a button, and it provides very little context around it. Seniors do well in extracting a theory of purpose, comparing it to the given purpose, dealing with the idea that it is a first step in incrementally building the application, and learning that while the field is not used (yet), it already displays - and that the application has many dimensions in which it fails in ways that are not intended. 
  2. Gilded Rose. This is a programming kata, a function that takes three inputs, and the inputs could just as well be text fields; a text field is just an interface (there's a small sketch of this after the list). The function gives a clear and strong perspective on code coverage but also risk coverage - like, who said you weren't supposed to use hexadecimal numbers? With this text field I see the ability to learn, and it is my favorite when selecting juniors I will teach testing, who will need to pick up guidance from me. Also, if you can't see that code and the IDE are just another UI when someone is helping you through it, I feel unsure about supporting you in growing into a balanced contemporary exploratory tester who documents with automation and works closely with developers.  
  3. Dfeditor animations pane. This is a real-size application where the UI has text fields, like they all do. The text field is in the context of a real feature, and a lot of the functionality is there by convention. This one reveals to me whether people discover functionality, and they need to be able to do that to do well in testing. 
  4. Azure Sentiment API. This is an API with a web page front end, with an ML implementation recognizing the sentiment of text automatically. This one is the hardest to test and makes aspiring testers overfocus on data. For seniors it really reveals whether people can tell the difference between feedback that is useful and feedback that isn't, through connections to business and architecture. 
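For Gilded Rose in particular, the point that a text field is just an interface takes only a few lines to make: the kata's three inputs can be probed exactly like fields on a screen. A minimal sketch, assuming the commonly shared Python starter code with an Item class and a GildedRose.update_quality method (names vary between versions of the kata):

```python
# Probing the kata's three inputs (name, sell_in, quality) as if they were
# text fields on a screen. Assumes the common Python Gilded Rose starter code;
# the exact module and class names vary between versions of the kata.
from gilded_rose import GildedRose, Item


def update_once(name: str, sell_in: int, quality: int) -> Item:
    """One 'submit' of the three inputs: run a single day's update."""
    item = Item(name, sell_in, quality)
    GildedRose([item]).update_quality()
    return item


# Exploratory probes: nobody said the inputs have to be sensible.
probes = [
    ("Aged Brie", 2, 0),                    # a named special case
    ("Sulfuras, Hand of Ragnaros", 0, 80),  # another special case
    ("Mysterious item", -1, -5),            # who said quality can't start negative?
    ("Aged Brie", 0x10, 50),                # hexadecimal is still just an int here
]
for name, sell_in, quality in probes:
    item = update_once(name, sell_in, quality)
    print(f"{name!r}: sell_in {sell_in} -> {item.sell_in}, "
          f"quality {quality} -> {item.quality}")
```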
Watching people in interviews and trainings, my conclusion is that more practice is still needed. We continue to treat testing as something that is easy and learned on the job without much focus. 

If I had the answer key to where bugs are, wouldn't I trust that the devs can read it too and take those out? But alas, I don't have the answer key. My job is to create that answer key. 


Thursday, July 29, 2021

Tester roles and services

An interesting image came across my Twitter timeline. It looked like my favorite product management space person had been thinking and modeling, and had created an illustration of the many hats people usually wear around product creation. Looking at the picture made me wonder: where is testing? Is it really that one hat in one category of hats? Is it the reverse side of every single one of these hats? Don't pictures like this make other people's specialties more invisible?

As I was talking about this with a colleague (like you do when you have something on your mind), I remembered I had created a listing of the services testing provides where I work. And reading through that list, I could create my own image of the many hats of testing: 

  • Feature Shaper focuses on what we think of as feature testing. 
  • Release Shaper focuses on what we think of as release testing. 
  • Code Shaper focuses on what we think of as unit testing. 
  • Lab Technician builds systems that are required to test systems. 
  • On-Caller provides quick feedback on changes and features so that no one has to carry major responsibilities alone.  
  • Designer figures out how we know what we don't know about the products. 
  • Scoper ensures there's less promiseware and more empirical evidence. 
  • Strategist sets us on a journey to the future we want for this project, team and organization. 
  • Pipeline Architect helps people with their chosen tools and drives the tooling forward. 
  • Parafunctionalist does testing in the top skill areas extending the functional: security, reliability, performance and usability. 
  • Automation Developer extends test automation just as application is extended. 
  • Product Historian remembers what works and what does not, and whether we know it, so that we know. 
  • Improver tests product, process and organization and does not stop with reporting but drives through changes. 
  • Teacher brings forward skills and competencies in testing. 
  • Pipeline Maintainer keeps pipelines alive and well so that a failing test ends up with an appropriate response. 
With all these roles, the hats in my team are distributed across the entire team, but they already create a reality where no two testers are exactly the same. And why should they be: we figure out the work that needs doing in teams where everyone tests - just not the same things, nor the same way. 

Wednesday, July 28, 2021

The Most Overused Test Example - Login

As I am looking for a particular slide I created to teach testing many, many years ago, I run into other ones I have used in teaching. Like the infamous, most overused test example, particularly in the test automation space - the login.

As I look at my old three levels of detail example, I can't help but laugh at myself. 


Honestly, I have seen all of these. And yet, while it is only a year since I last tested a login that was rewritten, I had zero test cases written down.

Instead, I had to find a number of problems with the login:

  • Complementing functions. While it did log me in, it did not log me out but pretended it did. 
  • Performance. While it did log me in, it took its time. 
  • Session length. While it did log me in, the two different parts of it disagreed on how long I was supposed to stay in, resulting in fascinating symptoms when logged in long enough, combined with selected use of features.  
  • Concurrency. While it did log me in, it also logged me in a second time. And when it did so, it got really confused on which one of me did what. 
  • Security controls. While I could log in, the scenarios around forgetting passwords weren't quite what I would have expected. 
  • Multi-user. While it logged me in, it did not log me out fully, and sharing a computer between two different user names was an interesting experience. 
  • Browser functions. While it logged me in, it did not play nicely with browser functions remembering user names and passwords and password managers. 
  • Environment. While it worked in the test environment, it stopped working in the test environment when a component got upgraded. And it did not work in the production environment without ensuring it was set up (and tested) before depending on it. 
I could continue the list far further than I would feel comfortable. 

Notice how none of the forms of documenting testing suggest finding any of these problems. 
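What exploring uncovered can, once fixed, be pinned down so it stays fixed. For the first item above, the logout that only pretended to log me out, such a check could look roughly like the sketch below; the base URL, endpoints and credentials are hypothetical placeholders:

```python
# A sketch of pinning the "logout only pretends to log me out" finding:
# after logging out, the old session must no longer grant access.
# The base URL, endpoints and credentials are hypothetical placeholders.
import requests

BASE = "https://test-env.example.com"


def test_logout_actually_ends_the_session():
    session = requests.Session()
    login = session.post(f"{BASE}/login",
                         data={"username": "tester", "password": "secret"})
    assert login.ok

    profile = session.get(f"{BASE}/profile")
    assert profile.status_code == 200   # logged in, access granted

    logout = session.post(f"{BASE}/logout")
    assert logout.ok

    after = session.get(f"{BASE}/profile")
    assert after.status_code in (401, 403)  # the old session must be rejected
```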

Testing isn't about the test cases, it's about comparing to expectations. The better I understand what I expect, the better I test. And like a good tester, if I know what I expect, I tell it in advance, and it still allows me to find things I did not know I expected - with the software under test as my external imagination.