Friday, January 28, 2022

Software Maintenance

This winter in Finland has not been kind to our roads. I got to thinking of this sitting in the passenger seat of a car, slowly moving on an ice-covered bumpy road, with potholes in the ice left from the piles of snow that did not get cleared out when the weather changed again. The good thing about those potholes is that they are temporary in the sense that given another change of weather, they either get filled or the ice melts. Meanwhile, driving is an act of risking your car. 

A similar phenomenon, of a more permanent type if nothing is done, happens for the very same weather reasons, creating potholes in the roads themselves. The impact for the driver is the same: driving is an act of risking your car. Without maintenance, things will only get worse.



This led me to thinking about software maintenance and testing. Testing is about knowing which roads need attending to. Some roads are built weak and need immediate maintenance. Others degrade over time under the conditions they face. Software does not keep running without maintenance any more than our roads can be safely driven on without maintenance.

Similarly, we have two approaches to knowing which roads could use maintenance, and the same two apply to testing software:

  • automation: have someone drive through the road and identify what to fix out of a selected set of things that could be off
  • thinking: recognize conditions that increase the risk of selected types of things, or that could introduce new categories of problems we'd recognize when we see them but may have trouble explaining
Knowing you need maintenance is the start of that maintenance. And having the machinery to drive through every road is an effort in itself, so we will be balancing the two. 

We care about knowing. But as much as we care about knowing, we care about acting on the knowledge more. 


Friday, January 21, 2022

In Search of Contemporary Exploratory Tester

We had just completed our daily, and a developer in the team had mentioned they would demo the single integration test they had included. Values go in to Kafka, a stream comes out, and the test compares the transformation the black box makes between input and output. I felt a little silly confirming my ideas of what was included in the scope, thinking everyone else was most likely already absolutely clear on the architecture, but I asked anyway. And as soon as I understood, I knew I had a gem of a developer in the team. From that one test (including some helper functions and the entire dockerized environment), I had the perfect starting point for the exploratory testing magic I love. But also, I realized I could just as easily list the things while pairing with the dev, and we could fix and address whatever problems there might be together. Probably, I could also step away, see the developer do well, and just admire the work. That's when I realized: I had finally landed in a team where I would not get away with the traditional ideas of what testing would look like. 
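To make the shape of that test concrete, here is a minimal sketch in Python of a black-box Kafka transform check, using the kafka-python library. The topic names, the payload, and the expected field are hypothetical placeholders of mine, and the sketch assumes a broker already running from a dockerized environment like the one the developer had set up; the team's actual test looked different.

    from kafka import KafkaProducer, KafkaConsumer
    import json

    BROKER = "localhost:9092"   # assumed to come from the dockerized environment

    def test_black_box_transformation():
        # feed a known value into the input topic (hypothetical topic and payload)
        producer = KafkaProducer(
            bootstrap_servers=BROKER,
            value_serializer=lambda v: json.dumps(v).encode("utf-8"))
        producer.send("input-topic", {"reading": 42})
        producer.flush()

        # read the output stream and check the transformation we expect to see
        consumer = KafkaConsumer(
            "output-topic",
            bootstrap_servers=BROKER,
            auto_offset_reset="earliest",
            consumer_timeout_ms=10_000,
            value_deserializer=lambda v: json.loads(v.decode("utf-8")))
        outputs = [message.value for message in consumer]
        assert any(out.get("reading") == 42 and "processed_at" in out
                   for out in outputs)

From one test like this, exploring is a matter of varying the input values, the timing and the volume, and watching what the output stream does.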

This week I have been interviewing testers, and the experience forces me to again ponder what I search for, and how I would know I have found someone with potential. Potential is the ability and willingness to learn, and learning requires wanting to spend time with a particular focus. 

Interviewing reminded me of the forms of bad ideas: 

  • the developer who does not enjoy testing as the problem domain
  • the tester who has not learned to program, to think in architectures, or to work well with business people (the *established exploratory tester*)
  • the test automator who made a particular style of turning manual tests into automation scripts their career (the *unresultful automator*) 
  • the total newbie who wants to escape testing to real work as soon as they can
I don't want those. I want something else. 

So I came up with two forms of the role I may be looking to fill, from the perspective of someone who holds space for testing. 

  1. Test systems engineer
  2. Contemporary exploratory testing specialist



A test systems engineer is a programmer who enjoys the testing domain, and wants to solve problems in that domain with programming. They want to apply programming with various libraries and tools to create fast feedback mechanisms that support the teams they contribute in. They won't take in manual test cases and just turn them into automation, but they will create an architecture that enables them to do the work smart. For this role, I would recruit developers with various levels of experience, and grow them as developers. A true success for this role in a team looks like whole-team ownership of the systems they create, enabling the team with a mix of test systems and application programming over time.

A contemporary exploratory testing specialist is really hard to find. This is a tester who knows enough programming to work with code at least in collaboration (pairing/ensembling), can figure out change by reading commits (as nothing changes without code changing when infrastructure is code), and can target testing at those changes, combining attended and unattended testing. Put in a place where there is the choice of never looking at the integrated application because of the appearance of being able to do it all in code, this person would choose to experiment to find what we may have missed. Nudging left and whole-team testing would be their things: not waiting at the end of the pipeline, but building and testing branches for some extra exploring, pairing on creating things, reviewing unit tests, and defining acceptance criteria and examples are some of the tactics, and every one of them can also be shared with the team. Meditating on what we might have missed in testing and understanding coverage are their go-to mechanisms. 

I think I will need to grow both. Neither is usually readily available on the market. And no one is ever ready in either of the two categories. 

The better your whole team, the more you need to be a chameleon that just adapts to the environment - and holds space for great testing to happen. 

Thursday, January 20, 2022

One Company, Five Testing Archetypes

In the last two years, I have had an exceptional opportunity to deepen my understanding of the context archetypes, working in a company that encompasses all five. But before we dive into that, let me share an experience.

Facilitating a workshop on what success looks like in test automation, I invite pairs of people to compare their experiences and collect insights and related experiences on what made things successful. The room is full of buzz as some of the finest minds in test automation compare what they do and what they have learned. The pair work time runs out, we summarize things, and here's what I learn: test automation is particularly tricky for embedded systems. 

For years, people could silence me by telling me how embedded systems are special. And sure they are, like all systems are special. But one thing that is special about working with embedded systems is how convoluted "testing" is. And today I wanted to write down what I have learned of the five archetypes of testing that make talking about testing and hiring "testers" very complex even within a single company. 

Software and Systems Testing is the archetype I mostly exist in. I come to a problem software first, understanding that software runs on other software, which runs on hardware. That hardware can be built as we build software, from components of various abstractions to integrate, or it can be a ready general-purpose computer, but it exists and matters to some degree. Like a Jenga tower, you will feel shaky on the top layers when the foundation is shaky, and that is usually the difference that embedded systems bring in. But guess what - that is how you feel about software foundations too, and there is a *remarkable* resemblance between embedded software development and cloud development in that sense. 

With software and systems testing, you are building a product in an assigned scope, overlaying your testing responsibility with the neighboring teams, and a good reminder for this archetype is that for testing, you will always include your neighbors even if you like to leave them out of scope of consideration for development purposes. Your product may have 34 teams contributing to it (as I had last year) or it might have a much simpler flow when considered end to end. Frankly, I think the concept of end to end should be immediately retired when looking at things from this archetype's perspective. 

Hardware (Unit - in isolation / in integration and Compliance) Testing is something I have been asking a lot of questions about in the last two years, and I find it an immensely fascinating specialty. The stuff I loved about calculating needed resistor sizes is the bread and butter in hardware, but making it only about the electronics / mechanics would be an underappreciating way of describing the complexities. Hardware testing is a specializing field with many things worth standardizing, and standards compliance makes it such a unique combination. The integration of two hardware components has its own common sets of problems, and the physical constraints aren't making life simple. Hardware without software is a bit of a lifeless box, so hardware testing in integration, and particularly compliance testing, already brings in software and system perspectives, but from the perspective of a very specific slice. 

Really great hardware testers think they don't know software testing, yet do well in helping figure out the risks of features. I can't appreciate enough the collaboration hardware designers and hardware testers are offering. The interface there, particularly for test automation success, is as close to magic as I have recently experienced. 

Production Testing is the archetype that surprised me. Where hardware testing looks at design problems with hardware, production testing looks at manufacturing problems with hardware. And again, since hardware without software is a lifeless box, it is testing of each component as well as different scopes of the integrated systems. The way I have come to think of production testing, it is the most automation-system design and implementation oriented kind of testing we do. Spending time on each individual piece of hardware translates to manufacturing costs, and the certainty of knowing the piece you ship to the other side of the world was quality controlled before sending is relevant. 

Being able to connect our production testing group to a great unit testing trainer was one of my routes to learning to appreciate this. Ensembling on their test cases, seeing how they are different, reading their specs. And finally, being on the product side to build test interfaces that enable the production testing work - I had an idea of it, but I did not understand it. 

Product Acceptance Testing is the archetype of testing I could also just call acceptance testing, and it is associated with the promise of delivering a project, not a product. If you need to tailor your product before it becomes the customer's system, you will probably have need of something like this. We call it FIT or FAT (in both cases, it's the integrated system, but either in an artificial or a real physical environment), and the test cases have little in common with the kind of testing I do within Software and Systems Testing. This is demonstrating functionalities with open eyes, ready for surprises and really wishing there were none. 

IT Testing is the final archetype, focused on the business systems that run even software companies. There may well still be companies that don't have software products (even if the transformation is well on its way to making every company a software company), but there are no companies that have no IT. Your IT system may be that self-built Excel you use, or a tailored system you acquired, but when it runs *your business*, you want to test to see your business can run with it. Not being able to send invoices, terminating your timely cash flow, has killed companies. 

The difference in IT Testing comes from the lack of power. The power is in the money starting the projects. The power is in agreeing more precisely what is included, or agreeing to pay by the hour. Constraints are heavy on the IT systems you are using as a foundation, because this is not the thing you sell, this is something that enables you to sell. 

The archetypes matter when we try to discuss testing. Because it is all testing. It is just not working with the same rules, not even a bit. 


Monday, January 17, 2022

Scaling Testing Bubbles

Back in the days of face-to-face conferences and the golden era of paid testing conferences, we had up to 150 people come together for two days to discuss testing in Finland. Going to large international conferences, seeing audiences of up to a thousand was typical. Living in the testing conferences bubble, you would meet other testers from other companies and other countries, with an occasional brave developer by trade dipping their toes into the mix.

As years passed, I started noticing that half of the speakers were usually from the same circuit, learning more and sharing more each time. People came and got started speaking; some stayed around in the circuits, others faded away, focusing on changing things from within the organizations that employ them.

Inside companies, as years passed, the position of testers also changed. Where in the past you could expect to find a condensed group of testers, agile sent everyone to teams, and a typical team would have one specializing tester. At the same time, communities of testers became even more active. Communities within companies, and communities connected by local or global threads, emerged. 

The world started to look more like this. A single tester, no other testers in sight. 

Picture 1. Single Tester

But there were other great colleagues. When we no longer had that little group of testers who could feel like they were up against the world, we were no longer up against anyone. Instead, we had brilliant developer colleagues and great collaboration attempts. In a team, for every tester, we had 7 developers. 

Picture 2. Tester embedded in development team

Similarly, development teams did not exist in organizations in a vacuum with no one else around, but had great specializing colleagues: product owners, engineering managers, support, sales, marketing, legal, localization, documentation, user experience, you name it. In a typical organization, that would add 7 other stakeholders to the 7 developers we'd get to figure software out with. 

Picture 3. Development team with stakeholders

The development team with stakeholders doesn't build the software just for their personal enjoyment, but has a group of users in mind. Users for problems worthwhile automating with code come in hundreds, thousands, millions. 

Picture 4. Development organization with users

With the strengthened relationships with the developers, and the new fact of repetition we can no longer deny or avoid with continuous integration and embracing change, testing became something that isn't the tester's job; it is the job of everyone in the development organization. Finding someone to hold space for testing became something that needs to be intentional, and assigning responsibilities for looking after the work moved from test managers to the testers in the teams. 

In the organizations, we had many teams and we could find a few colleagues with similar centers of focus - and build an internal community, a topic popular in recent years. We'd invite everyone to join, on the theme of testing, not testers. 

In the communities at large, we would find others trying to figure out their tester and testing corners of the world, and have strength in numbers. 

How Many Testers Are There? 

Drawing these pictures of that one tester hiding in the sea of heads made me wonder how many testing specialists there are. Clearly, programmers sharing the testing work are a significantly larger group already within the scale of one team, but as we scale up the number of teams to an organization, to a country, or to the world, what does this really look like? To get ideas on this, I went to statistics.

In Finland, 6.8 % of workers are in the ICT sector. Going with the number of people employed in 2020, 2 835 000 people, I calculated the number of people in ICT: 192 780. Just a little short of 200 000. I double-checked other statistics, discovering the numbers to be more like 120 000, but decided the first number was good enough. If we had 200 000 people in ICT and one out of 15 was a tester, we would have 13 000 testers and 91 000 programmers. 

They also report that we need 7000 new programmers each year, which at the same ratio means needing 1000 more testers every year. 
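The back-of-envelope arithmetic behind those numbers, written out as a small Python sketch; the 1 tester : 7 developers : 7 other stakeholders split (one tester per 15 ICT workers) is my reading of the pictures above, and the figures are the same rounded ones as in the text.

    employed_2020 = 2_835_000            # people employed in Finland in 2020
    ict_share = 0.068                    # 6.8 % of workers in the ICT sector
    ict_workers = employed_2020 * ict_share      # 192 780, rounded up to ~200 000

    ict_rounded = 200_000
    testers = ict_rounded / 15           # ~13 333, "13 000 testers" in round numbers
    programmers = 13_000 * 7             # 91 000 programmers, using the rounded tester count

    new_programmers_per_year = 7_000
    new_testers_per_year = new_programmers_per_year / 7   # 1 000 more testers every year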

In my search of statistics, I also learned that the EU average for the percentage of the workforce in IT is 3.7 %, but Finland being a tiny country with 5.5M people, there are a few tester folks still available for networking and conferencing. 

Why Should We Care?  

I find it fascinating to look at how things have changed, and will be changing. We feel that the number (or at least the proportion) of testers has been going down from the 1:1 recommendations of the tester golden eras - eras a lot less shiny now that we know what works better: developers owning quality and testing over externalizing it. At the same time, the number of teams doing software development has gone up, and it may just be that we are generating fewer specializing testers. Similarly, we no longer recruit just programmers. Embedded device programmers, full-stack programmers, devops folks, reliability engineers, and data analysis specialists are all specialist forms of programmers. 

We need to keep our flexibility and grow with the industry. And the industry is still growing. 

We'll serve our hundreds, thousands, millions of users with quality software and timely delivery. Let's find the mix that invites in the diverse workforce and enables every one of us to feel the work we figure out is work worth doing. 




Friday, January 14, 2022

Advice for the New Tester

When I started testing back in the day, my experience was not very far from what new testers get these days. I had little idea what testing was, and was told to 
  • look for any and all discrepancies in a test application and 
  • report my findings clearly
  • limit my work to 1 hour in the classroom where 7 others were doing the same 
I passed whatever criteria they had for my results (in comparison to what they could expect) and my reports, and benefited for the first time from my inability to say no to opportunities. I became a tester, and they paid me for doing more of what I had done for that one unpaid hour.

Over the years, I got more experiences of interviewing for tester positions. I remember one with homework, because the recruiting manager later spoke, on a university testing course I was teaching, about statistics he had collected from those exercises, claiming no one knew how to use the equivalence class and boundary value analysis techniques - and in days when I knew less than today, I had deeply studied the simpler forms of those techniques. Yes, I thought I knew the techniques when I taught at the university and when I got the job from that homework and interview, but I learned later there were layers I did not understand, thanks to Cem Kaner. 

If you are a new tester these days, your entry to the industry will be both similar and different. You will be chosen, eventually, because people believe in your potential. Your personality and attitude, and your skills in learning, will matter more than anything else. But unlike my start, where a keen eye for seeing problems was sufficient, now the companies are almost certainly going to quiz you on your ability to learn code. It is unlikely you will sit a "testing exam" in a classroom with other candidates, and likely that you will be sent a test design homework and a test automation homework, or the two combined. You might also be sent a programming homework, perhaps an online one with automatic grading. 

My advice to you, dear new tester, splits into two parts. 
  1. Foot in the door. You got that first position. Now what? This is just the beginning and what you do after matters. 
  2. Looking for doors. You don't have that position, knocking needs to start or extend. How to go about that? 
Foot in the Door

Bling. The all-too-familiar Slack message sound and the little red bubble saying someone wants to talk to me. The message explains that they were hired as a tester through a recruiting program, and are the first and only tester in the organization. The ask of what they should do is unclear, and they've just been told not to interrupt others doing important work. 

A few messages later, we meet to pair. We look at their application to test over a screen share and discuss what they should do with examples of doing things and finding issues with their application. An hour later they continue their independent path.

A week later they message me again: they received praise for the results they were able to provide in testing.

Getting that first position isn't the end goal. It is the first step. Now the work starts. Your career, from the first step, is too important to be left to your manager. It's your responsibility, and if you are lucky, your manager can help you with whatever you need in those early days. Different managers can help with different things, and some aren't skilled in guiding a new tester but are excellent at recognizing when you find that right thing they want to see, and amplifying it with praise. 

Here's my advice on what you would want to figure out: 
  • Center the product, features and quality. Spend time with the application. Sprinkle in pieces of documentation, but don't try to read up on everything before starting to test. Read while you test. Focus on how the feature can work and see it work. The foundation of your work will be empirical touch. Use it. Spend time on it. "It" can be a user interface or an API, but understand what it is supposed to do and confirm you see it do that. Ask yourself why the feature matters, and when you don't know, ask. You will become a product expert when testing; you are not only executing someone else's queries.  
  • Make notes. Mindmapping what you have seen, naming and grouping it, is a great practice. Collect questions and create a way for you to track which questions you have answers for. Ask questions in conversations you have, and initiate those conversations. Prioritize your questions and seek answers to them in documentation in addition to people, but don't rely only on documentation. Share your learning work (notes) with whoever is tutoring or managing you to invite their ideas. 
  • Active inquiry. Using the software will answer some of your questions. Using the version currently in production to compare with the new version will answer other questions - newly introduced issues will be considered more important. Combine sources and pay attention to discrepancies to evolve your critical observational skills. 
  • Understand scope. What the user has (application, feature) is one thing. What we are changing now is another. Learn to follow code changes in version control. Also follow agreed change work in the issue tracker your workplace uses, but ground yourself in code. Don't care about the details of the code (the devs put all their energy there), but read the name and description developers add to the change they make and understand what they are trying to change. Again, anything that just broke or was just introduced defaults to more valuable information for the developers and product owner as information consumers. 
  • Priorities of bugs. Understand what information is considered important by giving information and paying attention to how it is received. The most valuable information about problems results in action - a fix. The likelihood of people wanting to fix comes in three flavors: recency (we talked about this), impact on the user, and the level of issuelessness your project is on. If you can tell the thing you are seeing is bad for someone who matters, we all care, always. And some projects just care about almost everything, but those are rare. When reporting, have conversations on the new things and leave a paper trail (report in your locally agreed written practice). Even if they don't fix anything, you will want to show what you're producing when your work is assessed by your manager. This area will change and grow a lot after you get the first hang of it.  
  • Vocab, vocab, vocab. Always move around with pen and paper or equivalent. When you hear a word you don't know, write it down, and if it feels appropriate for the flow of conversation, ask about it. Google it afterwards. Ask about it afterwards. You are acquiring language; you are responsible for your learning, and everyone will help. Use whatever tactics for memorizing work for you on the central vocabulary. 
  • Make space for code. Learning about the application one feature at a time lends itself well to documenting some of your learnings in test automation. If you read a lot of logs, create something that searches and counts things in the logs (see the sketch after this list). If you have a basic flow you tend to check, create test automation for it. Find someone to pair with to save a lot of time, or take newbie tutorials and apply what you are learning. Code comes with coding. Resist the temptation of not starting on this. It will matter, if for nothing else then for your increased understanding of software.
  • Invest in learning. Code isn't your only thing to learn. Create a learning habit. Read and apply ideas. Follow blogs, testers in the industry, newsletters, join a Slack community. Don't spend too much time there. Sample and apply. 
  • Variables all the way. You started with seeing the functionalities work and getting lucky in seeing them not work. Now, turn up the intention: you are successful in testing when your software fails in meaningful ways you can describe. Make it fail. Start thinking in variables - everything you can do differently and turn that up. You did? More. 
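For the "make space for code" bullet above, here is a minimal sketch of the kind of log-searching-and-counting script that can be a first piece of automation. The directory and the patterns are hypothetical placeholders of mine; the point is that a useful first script can be this small.

    # count_log_events.py - a tiny first step in automating log reading
    import re
    from collections import Counter
    from pathlib import Path

    LOG_DIR = Path("logs")                 # wherever your application writes its logs
    PATTERNS = {
        "errors": re.compile(r"\bERROR\b"),
        "warnings": re.compile(r"\bWARN(ING)?\b"),
        "timeouts": re.compile(r"timed? ?out", re.IGNORECASE),
    }

    counts = Counter()
    for log_file in LOG_DIR.glob("*.log"):
        for line in log_file.read_text(errors="replace").splitlines():
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    counts[name] += 1

    for name, count in counts.most_common():
        print(f"{name}: {count}")

Run it after a test session, and the counts become one more thing to notice changing between builds.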
You will want to have a track record of results in your first 6 months: what did you learn? What did you find? What tasks did you complete? What did you change in how others think? What documentation (incl. automation) do you have to show for it? Your first 6 months are to show you can grow.

The years after that are growing in everything. Choosing timeframes of specialization, volunteering for a mix of tasks you can do and ones that stretch you. This never stops. 

Looking for Doors

I started my post with explaining what you do when a door opens and you get your foot in. What if you are still searching for that door? 

You can start doing everything I explained you do at your first work without the first work. Create a portfolio of the work you are doing. Portfolio of highlights of your notes. Your test designs. Your test automation. Organize it and make it presentable. 

Whenever you have code, have tests. Whenever you don't have automated tests, have notes of tests. When you create code, learn to extract a method instead of writing a comment, for every if remember the else, and think about incorrect inputs as well. 
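A minimal sketch of what that advice looks like in code, with an entirely made-up discount-code example: the explanatory comment becomes a named method, the else branch is written out as a decision, and incorrect inputs get handled instead of ignored.

    import datetime

    # Instead of a block preceded by "# check that the discount code is valid and not expired",
    # extract the check into a method whose name says the same thing:
    def is_valid_discount_code(code, expiry_dates, today):
        """Return True only for a known, unexpired discount code."""
        if not code:                      # incorrect input: empty or None
            return False
        if code not in expiry_dates:      # incorrect input: a code we never issued
            return False
        if expiry_dates[code] >= today:
            return True
        else:                             # the else is an explicit decision, not an accident
            return False

    # usage (hypothetical data)
    expiries = {"WINTER22": datetime.date(2022, 2, 28)}
    print(is_valid_discount_code("WINTER22", expiries, datetime.date(2022, 1, 14)))  # True
    print(is_valid_discount_code("", expiries, datetime.date(2022, 1, 14)))          # False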

If you are getting chances at homework exercises, keep your solutions in your portfolio but don't publish them. 

You can also try work where you either get paid a little or learn from others. The first you might do by finding crowdsourcing projects. The latter you might do by volunteering in open source projects. 

The network you are building can be invaluable in finding the right door. Keep knocking. Balance your time and commitments. 

There's an old post going around saying our industry needs smart people. I agree. That blog post suggests smart means reaching out to the gurus writing those blog posts, but I would suggest that may be a sign of growing up in a competitive culture, or a degree of privilege leading you to trust that trying is better than not trying. Reach out by all means. But first do some homework. They might have already written a blog full of applicable advice. 

Beep. A Twitter message arrives, pointing to a specific tweet of mine as an opening to ask about a specific concern, briefly explaining their own thinking. They have been searching for a job, and the jobs all seem to want programming. 

Instead of a conversation in messages, we agree to talk on a call. We talk a bit, and end up doing one of my exercises to add an example to what we talk about, on the level of programming needed when starting with testing. One exercise turns into a second in a week, and a few more in a series. Exercises, conversations, ideas on how to show their strengths. Knowing them just a little, I offer to recommend them and to be their reference. 

I sit through a reference interview and explain how they have been learning in the exercises. They get two offers at the same time and choose their position. We meet a few times for supportive tips on succeeding at their work, and then move to pull scheduling. I've made a friend, and while teaching, I learned a lot myself. 

We need intelligent people in this industry. People shine when they get the support early on, and then run with it. Talent is equally divided, opportunity is not. 

Saturday, January 1, 2022

Classic Example of Exploratory Testing

As I listened to Lee Hawkins and Simon Tomes's conversation calling for a "classic example of exploratory testing" and discussing "respecting current norms" when joining projects while still wanting to bring in the "rigorous practice that is deliberate" and "increases product knowledge" and "is valuable beyond added bug reports", I just felt like I needed to sort out my head by writing about the most recent testing effort I have been thrown at.

A month ago, I started making my transition from one project to another. I'm still figuring out what the right approach for me is, as I am joining as a consulting tester with a lot of freedom in choosing what I do (and in commanding others to do things differently). But I have already learned a thing or two:

  • There is a release coming up, and a release behind the team a few months ago. The delta between the releases shows that testing of the previous release wasn't a particularly huge success, because the scope is bug fixes that the customer is requesting. And while I believe we will miss some bugs, the full list does not fit my idea of what good results from testing and fixing would look like.
  • There is an impressive set of automated tests on unit, integration and UI levels fully administered by the developers. The developers even create a listing in English on what things are in the automation so that a non-programming tester can make sense of it. 
  • There is a tester in the team and the tester isn't testing. I have little idea what they do, but nothing of it resembles testing. Developers test. Developers test and automate. The product owner writes specifications and answers all questions. It takes a better-than-average tester to add value when you have developers who do well on basic testing, and I'm concluding that we may not all be motivated for that task. 
Bad results. Good automation and appearance of testing. Lost tester. 

My first act as a second tester in this team had been to address a communication problem in a style I consider almost a signature move for me. On the product owner wanting to reprioritize a fix as "not important", I take up the fixing myself without asking for permission. The whole conversation on the "not interesting problem" makes me understand what might have led to the lost tester I am watching now. The fixing on a completely new codebase takes me a few hours as I find the right place for the simple fix and follow it through the pipeline, seeing the fix and possible side effects on the final product. 

I dig in a little deeper, into documentation, and learn there are two generations of test cases. 

The first generation of test cases follows the format of "System administrator shall be able to view users", with detailed step-by-step instructions on how to go about one way of seeing this is true. There are 66 of these tests, and reading them all through takes me 2 hours. No useful information, except for one point: some of these test cases describe features that aren't available yet. Someone scoped out functionality, but the tests don't reflect that. There is no evidence of anyone ever running these tests, but if it took me hours to read, it has taken someone weeks to write. I recognize that 4800 more of these styles of tests are required from a sister product with a subcontracting company, and know I will have a few more hours of work ahead of me. 

I also find a factory acceptance testing procedure that is separate from the development-time test cases. Same stuff. No useful information. Another hour of reading through detailed instructions for things I could already deduce from the purpose of the application and the user interface. 

The second generation of test cases shows the team had made an effort to move away from the stepwise tests. I find 52 test cases, this time in a bullet point list in version control, with markings on which of these tests are (A)utomated and (M)anual. An example test from the list reads as "Protected pages redirect to login page". Again zero information value to me, but at least this generation of documentation isn't trying to tell me to set value 2 and then 5, leaving me already frustrated with the idea that 2 -> 5 is a completely different scenario than 5 -> 2 and that NOTHING in any of the documentation hints at this crucial information I had already learned by exploring the application. 

To describe a starting point for the testing I'm about to do, I describe 13 test cases. I can summarize the reminders from all the other documentation into 3 test cases, and add the other 10 for perspectives I think are relevant based on the exploratory testing I have done by now. One of the tests reads as "[Feature] Users, roles and logins", and it is all that I write down, knowing I can do a 15-minute and a 15-hour version of that based on how I perceive the risks. 

I create my usual structures to document my exploratory testing in Jira. Using Zephyr and those 13 test cases, I place them into a test plan I title "Pre-release Feature Testing". I know that as I continue exploring, I may change the tests and add more, and I know my personal goal is to now finally build up the listing of the testing that should be happening, and then do some of it in the schedule available.

I also create another plan I title "RC1 Release Testing", with a single test case: "[Release] Time on Customer Configuration" and decide I will invest 4 hours of my time after we think all other work is done, on exactly what the customer will experience. 

I start my exploratory testing work by outlining a test report, writing down first what changes with this release and how that leads to my assessment of risks. I collect metrics of Jira tickets and code commits, and analyze changes that might come from outside. 
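As an example of the change metrics, here is a minimal sketch of counting commits and changed files since the previous release, using plain git commands; the tag name is a hypothetical placeholder of mine, and the Jira side would be a similar query against its own API or exports.

    # changes_since_release.py - rough change metrics to ground a risk assessment
    import subprocess

    PREVIOUS_RELEASE = "release-1.0"   # hypothetical tag of the release the customer has

    commits = subprocess.run(
        ["git", "rev-list", "--count", f"{PREVIOUS_RELEASE}..HEAD"],
        capture_output=True, text=True, check=True).stdout.strip()
    changed_files = subprocess.run(
        ["git", "diff", "--name-only", f"{PREVIOUS_RELEASE}..HEAD"],
        capture_output=True, text=True, check=True).stdout.splitlines()

    print(f"{commits} commits touching {len(changed_files)} files since {PREVIOUS_RELEASE}")

The directories those files live in are a quick first map of where the change - and thus the risk - concentrates.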

I then choose the areas from my listing that I need to learn in order of risks related to changes. I figure out how to control incoming data, and how to access outgoing data in 3rd party systems. I learn how to access every server, every configuration and every log I can find. 

As I test and find problems, I make proposals on *not fixing* the problems by comparing them to what I am learning matters to the user, and to what is already in the version the customer has versus what problems are newly introduced. I know we are on a schedule to finish, and knowing quality is more important than getting quality right now. 

I see catastrophic symptoms of possible regression that the good automation is completely missing, but instead of just reporting the problem, I investigate it by comparing versions and identifying the environmental conditions, true for me now, that make the problem visible in both versions. 

I note there is a huge body of requirements and specifications I have not yet read, and make a note to go back to using that as a checklist of things I may not have considered, after I first address what the application itself is telling me about possibilities of variables and scenarios. 

I drive all my actions to me learning the application, the application domain, the architecture, the interfaces, and information we may be missing about the quality in ways that would be actionable with the team. 

This is exploratory testing. It is not ad hoc random time on the application seeing if it fails; it is deliberate, purposeful and investigative. It starts off with light documentation, and it ends with better documentation. And it takes skills to get it done to a good level of results. 

And by results I mean:
  • knowing more of the problems and limitations in action for the application
  • fixing the bugs that matter and deciding on the bugs that don't matter, together
  • scoping the project to schedule success 
  • documentation and test automation that we'll benefit from next time around
  • a tester knowledgeable on the problem domain and team context, enabling better collaboration
There are as many stories of how things are done, and of what in the application leads to the insights that provide the right results, and we may need to start telling more of them.