Friday, August 29, 2025

They Code With Me

As I was preparing for my summer vacation, I reflected on the last year and made an observation:

The time I spent thinking about and discussing what people don't know about test automation was more than it would have taken to teach them some of what they may be missing.

So I chose to be different. To live up to my exploratory testing values of being intentional, and to recognize that time on something is time away from something else. I created a rough list of what I wanted to teach.

I posted an hour-long meeting invite for the next six months, with the idea that I continue as long as we keep covering new ground. After I have taught programmatic testing with Python, I will continue with the same effort teaching people contemporary exploratory testing. I suspected I could not teach programming without teaching some contemporary exploratory testing too. I invited our testing specialist community and emphasized that this is entirely optional, but that in watching me do things and explain them, there is a chance they pick something up.

I have now run four sessions. 

In the first session, we learned the conventions around programmatic tests in Python, and the tools people would need. We focused on how they would set up test development environments, and how they would know whether the basic dependencies work.
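
To give a flavor of that first session, here is a minimal sketch of the kind of environment smoke test I mean; the package list is my illustration, not the exact one from the session.

# test_environment.py - a smoke test for the test development environment.
# Run with: pytest test_environment.py
import importlib
import sys

import pytest

# Illustrative dependency list; replace with whatever your project actually needs.
REQUIRED_PACKAGES = ["pytest", "requests", "playwright"]


def test_python_version_is_recent_enough():
    # Most current testing libraries assume a reasonably new Python.
    assert sys.version_info >= (3, 9)


@pytest.mark.parametrize("package", REQUIRED_PACKAGES)
def test_required_package_imports(package):
    # A failing import means the environment setup is incomplete.
    importlib.import_module(package)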

In the second session, we wrote tests to dig patterns out of log files, given that you already have a log file.
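
A rough sketch of what such tests can look like, assuming a plain-text log with one entry per line; the file name, the ERROR keyword and the id=... format are stand-ins of mine, not the session's actual log.

# test_log_patterns.py - dig patterns out of an existing log file with tests.
import re
from pathlib import Path

import pytest

LOG_FILE = Path("application.log")  # stand-in path; point this at your own log


@pytest.fixture
def log_lines():
    assert LOG_FILE.exists(), f"expected a log file at {LOG_FILE}"
    return LOG_FILE.read_text(encoding="utf-8").splitlines()


def test_log_has_no_error_lines(log_lines):
    errors = [line for line in log_lines if re.search(r"\bERROR\b", line)]
    assert errors == [], f"found {len(errors)} error lines, first: {errors[:1]}"


def test_every_request_got_a_response(log_lines):
    # Pairing pattern: every logged request id should also appear in a response line.
    text = "\n".join(log_lines)
    requests = set(re.findall(r"request id=(\d+)", text))
    responses = set(re.findall(r"response id=(\d+)", text))
    assert requests <= responses, f"unanswered requests: {requests - responses}"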

In the third session, we toured REST GET with a glimpse of POST, and the differences between asserts and approvals.
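
As a sketch of the difference, assuming the requests library and a public placeholder API as a stand-in for the real service; the "approval" here is hand-rolled to show the idea of comparing against a previously accepted result, rather than any specific library's API.

# test_rest_api.py - REST GET with a glimpse of POST, asserts versus approvals.
import json
from pathlib import Path

import requests

BASE_URL = "https://jsonplaceholder.typicode.com"  # public demo API as a stand-in


def test_get_single_item_with_asserts():
    response = requests.get(f"{BASE_URL}/todos/1", timeout=10)
    assert response.status_code == 200
    body = response.json()
    # Asserts pick out only the few facts we decided we care about.
    assert body["id"] == 1
    assert "title" in body


def test_post_creates_item():
    response = requests.post(f"{BASE_URL}/todos", json={"title": "buy milk"}, timeout=10)
    assert response.status_code == 201


def test_get_single_item_with_approval():
    # An approval compares the whole response against a previously accepted file.
    response = requests.get(f"{BASE_URL}/todos/1", timeout=10)
    received = json.dumps(response.json(), indent=2, sort_keys=True)
    approved = Path("todos_1.approved.json")
    if not approved.exists():
        approved.write_text(received, encoding="utf-8")  # first run seeds the approved file
    assert received == approved.read_text(encoding="utf-8")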

In the fourth session, we tried changing ids in HTML/JavaScript files to simulate control over the selectors of a web UI application, and created our first test with Playwright. We introduced pytest.ini, parametrized tests, and giving tests names outside the default convention.
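
A sketch of those pieces, assuming the pytest-playwright plugin provides the page fixture; the URL, the selectors and the test data are placeholders of mine.

# test_web_ui.py - first Playwright test, with parametrized cases given readable names.
#
# A matching pytest.ini could look like:
#   [pytest]
#   addopts = --browser chromium
#   testpaths = tests
import pytest
from playwright.sync_api import Page  # the page fixture comes from pytest-playwright

APP_URL = "https://example.com/"  # placeholder for the web UI under test


@pytest.mark.parametrize(
    "text",
    ["hello", "to be or not to be", "1234567890"],
    ids=["short-word", "full-sentence", "digits-only"],  # names outside the default convention
)
def test_typed_text_produces_output(page: Page, text: str):
    page.goto(APP_URL)
    page.fill("#input-text", text)           # hypothetical id we gave the text field
    page.click("#check-button")              # hypothetical id we gave the button
    assert page.inner_text("#result") != ""  # hypothetical id of the output element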

I have two options drafted for our fifth session, and because this is something I choose to do for my beliefs, I choose to follow another one of my beliefs: my energy takes us to a good next step, and I track what we have already explored, creating intentional overlap to shine light on core concepts.

I have received some positive feedback, but I also take the stats as positive feedback. People show up when it is scheduled. More people watch the recordings of me coding, making mistakes but also figuring a way out of them - today with the sharp eyes of someone in the audience.


I called my series "Code with me: Programmatic Tests in Python". While I wish they really coded with me, allowing people to watch us work on this as a pair, I'm still happy I am doing something.

I started off this post by referring to living up to my exploratory testing values. I have put a lot of thought into what those are, for me:

  • Agency
  • Intent
  • Learning
  • Results
  • Opportunity cost
  • Systems thinking
I guess teaching makes a good foundation for learning. After all, we See one, Do one, Teach one. What do you teach to change the community around you? 

Wednesday, August 27, 2025

The Reimagined Tester and How to Grow One

Five years ago we hired a trainee who grew into the kind of tester we need. A polyglot with regard to programming, shifting actively between Python, PHP and TypeScript. Centering automation and the ability to do great exploratory testing equally. Collaborative, and making an impact far wider than her individual contributor work. She is a career changer, and I suspect I will always admire her drive for learning and the combination that emerged from her past experiences and the new things the tech industry made available through work. We worked together for about a year, and I have closely followed her growth since.

Yesterday, quoting Grey's Anatomy she told me: 'See one. Do one. Teach one.' As corny as a life lesson on learning from Grey's Anatomy is, it's a great one-liner to describe the path we shared and the expectation I work with, that she works with.

It's not enough that you do. You need to see: pair and ensemble testing are essential. Seeing is essential. But so is teaching. Reinforcing what you learned by reflecting. Hearing your experience through others' learning.

So today, I refer to her as my prototype for the Reimagined Tester. We have talked about the ideas of contemporary exploratory testing, and how the skill set of a tester isn't either great skill at targeted feedback (testing) or at maintaining what we know (test automation), but a great intermix of the two.

There are more people like us, the Reimagined Testers, the Contemporary Exploratory Testers. But we are a minority in the field of testing. And the more I ask people to test applications and watch them test while pairing, the more I recognize we need a major revamp that goes beyond the lip service of claiming that what we say we should do is what we actually do in projects.

For the last five years, I have invested a significant chunk of my summers, and essentially my free time, in growing Reimagined Testers and figuring out how I could scale that. Because one a year is not enough.

Choices of Growing 2025

Looking back at how this year's choices emerged, I see three stages:

  1. Selection with a homework assignment
  2. Model an application and write test automation
  3. Find what others have missed
I'm writing a longer paper on the first one. The short story is that I had people test the To Do App, and chose the one who scored best on my schema emphasizing contemporary exploratory testing. That was a definite leading indicator of being able to do 3 when coached, but also a window into the possibility of showing the start of 2.
The candidate chosen did well on the leading indicators for 3 but left out any indicators for 2 due to time constraints. So this year we learned 2 first.

For the second one, modeling an application to write test automation, we made progress over the three months. We got comfortable with Robot Framework syntax and the particularly complex selectors of the application under test. We learned about structures and readability. About the transferability of test automation from the test developer's machine to somewhere else. I can frame it in two ways:
  • Look, we only got 2 test cases out of three months of effort
  • Look, we got so much learning and also 2 test cases out of three months of effort
In hindsight, I would make some of my own facilitation choices differently.
  • Make space for 'see one'. I handpicked courses that were good, but courses don't give you feedback when you miss the more subtle teachings. We ended up with more trial and error, and fewer results, because delayed feedback is not the best platform for learning good foundational practices.
  • If using courses to teach, structure the schedule so that the course gets completed. Sampling courses to start progress works great when you have a solid foundation, but not when you need to build one.
  • Choose a better tool. Robot Framework was not a good choice. We would have gotten so much more if we had used Playwright with TypeScript. The limited examples online. The hallucinations from GitHub Copilot. The technical limitations on some of the best parts of what Playwright is. They were my choices and they were wrong.
  • A real team with full-time people on the same work would be better. But it is not always possible. We don't really have test automation teams, or test teams.
There were some choices I believe worked, or that at least I'm happy with:
  • Introducing a trackable to-do list for feedback on improvements and corrections. It helped make progress and gave a sense of how the work grew as it was being done.
  • Check-ins on progress. Not ones on the calendar, but making space to collaboratively look at what was there and where it was heading.
  • Introducing other helpers, even if some of the help was self-discovered discouraged patterns. Making it so that my availability, with its variability, was not a blocker.
  • Fixing the codebase and discussing the fixes. While that introduced merge conflicts, we need to learn about merge conflicts early on. And we did.
  • Enforcing 'teach one'. Internal demos. Teaching twice to the internal community. Writing a commit analysis with the help of AI to reflect on the outcomes and sharing that with everyone. Essentially, becoming a speaker while a trainee.
The third stage was a deep-end expectation with exploratory testing. The assignment to 'find what others missed and customers have been finding, before the customers do' is a classic research formulation of testing. If story-based testing or system test automation were the key, there wouldn't be a gap to fill. Making a summer trainee piece D in a chain of testing by A and B and C and D, so that E would need to find less, isn't the testing kiddie pool. Well, 2 months, 19 bugs with 2 critical, and a skill of driving 3rd-party test data through APIs are all things I have to be happy with. A great foundation for more growth in contemporary exploratory testing. And my main takeaway, in their own words: "Two years as a tester before were entirely different from the testing being asked for right now". They see the Reimagined Tester.

In hindsight, I would make some of my own facilitation choices differently:
  • Teach with exercises. I have them, plenty of them. And we would do better if I taught. Maybe. That is, I taught some bits on an as-needed basis while coaching on choices of focus, tasks and priorities. But teaching with exercises would most likely have been helpful. Because here it was even more evident: the course material I wrote and asked them to study was never read beyond its start.
  • Teach meta. Like the fact that I am at the same time a manager, a consultant and a coach, and have conflicting ideas across my roles. Clarifying and repeating agreements is an essential skill to teach, and I learned this by failing at communication. It's always two people not getting each other.
There were some choices I believe worked, or that at least I'm happy with:
  1. Radical candor. Some feedback I had to give was corrective in nature, and it helped that we had established I am telling the things I see to help them grow. I did not enjoy giving some of the feedback, but doing it made the growth.
  2. Tester to tester coaching. I spent two weeks myself testing the same system to make a consultant recommendation on the team's future actions. I learned their test automation and created some of my own. I can come across as knowledgeable now in the business domain and project status. And I have spent hours hands-on with the system. My guidance was not high level, but steps I had taken and would take next if I had time.
  3. The note-taking emphasis. Being able to describe daily insights. Improving how we discussed results while coaching. While we agreed on leaving the notes public, they turned private as soon as I stepped out, but they existed. And they were fodder for genAI when generating test ideas.
  4. The automation insistence. Automation almost got dropped even though it was essential for completing the mission: find what others miss. Without insistence, a severely limited ability to test through GUIs would have won out, and that would not have been right.
In the end, I am reflecting on my own choices because I might be at a crossroads. I am still figuring out whether we can continue our common learning journey as I expect, or whether the state of the world means my next trainee is a career changer from traditional tester to reimagined tester. I want to believe it might be both, but I need to figure out scale. One by one won't fly.


Everyone would need this attention. And it is sad people did not get it, and ended up not learning all this stuff they should know.

Schools really leave a gap. Going to school 20 years ago and then leaving all your training to your employer left a huge gap. Relying on old-school ISTQB did not bridge the gap but widened it.

I suspect we are living in the final times of changing direction for the tester profession. Time will tell, and today I was a "lead developer" rather than a tester. But always, always, a tester at heart. <3


Tuesday, August 26, 2025

Revisiting EPrimer, Three Times Over

In 2021, I created a course I called Exploratory Testing Foundations. I posted it on dev.to, then an emerging platform. Since then, 9527 people have viewed it. I've taught a lot of people with it, even if not in its full theory form.

Since then, I have come to realize that the style of foundations I teach is somewhat different from the style of foundations some other folks teach, and I have dubbed my style of teaching contemporary exploratory testing. The most essential difference in this course comes from the fact that I insist on automation as documentation for exploratory testing.


The application is super simple: basically a text field, a button and three number values. All of those can be accessed with an automation script we can write in less than half an hour in an ensemble with complete programming newbies. In some versions of this course, I have regretted this design choice for my teaching, as some people pick up only that idea and run with it. Yet, it serves as a wake-up call for people who have incorrect ideas about exploratory testing as an approach.
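
For a sense of scale, here is roughly what such a script can look like with Playwright in Python; the URL and element ids are placeholders of mine, so the real locators stay part of the exercise.

# eprimer_check.py - drive the text field, the button and the three number values.
from playwright.sync_api import sync_playwright

APP_URL = "https://example.com/eprimer"  # placeholder for the exercise application


def check_sentence(text: str) -> tuple[int, int, int]:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(APP_URL)
        page.fill("#input-text", text)                   # hypothetical id of the text field
        page.click("#check-button")                      # hypothetical id of the button
        counts = (
            int(page.inner_text("#word-count")),         # hypothetical ids of the
            int(page.inner_text("#discouraged-count")),  # three number values
            int(page.inner_text("#possible-count")),
        )
        browser.close()
        return counts


if __name__ == "__main__":
    # The documented example later in this post: "to be or not to be" gives
    # 6 words, 2 discouraged words, 0 possible violations.
    assert check_sentence("to be or not to be") == (6, 2, 0)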

In the last week, this little gem of an exercise has seen a re-emergence in facilitated learning. Two times in sessions I facilitated, and once in a session that someone I have taught a lot ran at their place of work.

Format 1. Product development testers crowd pair testing for 1.5 hours

The first emergence of this exercise was with a crowd of testers. I asked one of the crowd to volunteer as the hands of the crowd, and had no intention of introducing the full dynamics of ensemble testing (with a designated navigator to act as the voice, and rotation of roles). I just wanted a pair for myself, while I channeled the crowd to decisions on what we would test.

I started off by opening the application and asking what the crowd proposed we would do first. Proposals included a random piece of meaningless text from briefly resting hands on the keyboard, special characters, an empty field, long text from resting hands on the keyboard for longer, and finally, a meaningful sentence: "to be or not to be".

The crowd did well to illustrate that all too often we testers start with the weird and the eccentric, even without ever having seen it work. So I picked the final proposal to execute, to see it work and we typed "to be or not to be". 

Then we stopped to discuss what we learned. We learned about red color, and a category of counting we had no clue of. 

I then pushed for documenting what we had just tested. I opened a template test case I had created with Robot Framework, and we documented our test: 

to be or not to be   6   2   0

I then helped us get a clue we had previously missed, and we typed "to be or not to be is hamlet's dilemma". And I addressed the idea of how easy it is when you know, and how I discovered that information myself in the first place: serendipity, but also pushing my luck to make serendipity likely. We also tried my usual demo sentence, "to be or not to be - hamlet's dilemma", because that shows a bug. A small difference, but intentional, to reveal extra information.

I introduced the idea of a paper with invisible ink, and our job to turn the ink visible, and mentioned that I was aware of 28 issues with it. It turns out, recounting the list on my phone, that it was 27 and my memory was off by one. I remembered, however, that the list started off with 22 items, and I was hoping catch-22 would remain. But I also discovered one cross-browser issue in the 10 minutes of prep for this session.

We then searched for a bug that I had *just* discovered on my machine, where under the right conditions it worked correctly on one browser (Safari) and incorrectly on another (Chrome). It turns out, though, that it works incorrectly on both browsers if the browsers have been fully refreshed / closed. I did refresh on the day, but clearly had a different version of the styles in the two browsers, showing a bug that would need more steps to reproduce.

There was very little testing going on, and a lot of discussion about choices we are making and being intentional. 

I showed a few slides. I discussed the traps. I showed the thinking quadrant model. I showed the test strategy written after testing, for helping future me to test. And I showed the three categories of time to track. 

Format 2. Product development SME-testers demoing two cases to anchor teaching for 1 hour

The second emergence was with a group of subject matter experts seeking guidance and inspiration on how to frame their "business process testing". This time the exercise served as a demo, and I showed two cases. 

I showed "To be or not to be - Hamlet's dilemma". And I showed a wikipedia listing of "I'm you're they're ...". We elaborated on how my choices were intentionally relevant for the domain, and that while the domain in this application was, weirdly, proper English, their own domain is something they are experts in and should apply that expertise, together with learning, when they test.

Our conversations were around the idea of lower levels of testing and how much of them they should be able to just accept. So we talked about finding problems that we thought lower levels of testing should have found, and giving that as feedback to the lower levels of testing. We talked about opportunity cost, and choices in the use of time. Shortcuts we could employ to center the value expected of us.

The crowd was so lovely with their compliments. They commented on clarity of explaining something and making it clear for them. 

The choice of slides was a little different here. I showed outputs and inputs of exploratory testing. And I showed the thinking quadrants model. 

Format 3. Product development testers ensemble testing for 2.5 hours

The third format was a creation inspired by my going back to this exercise, with someone who had history with the exercise from when I first created all the materials around it. They tell me they ran the exercise today, framing it to include automation as documentation and choices about the first methods of running.

They found a bug that I had not run into before! In this one, the count of words is smaller than the count of  possible violations. 

They also report their crowd started with seeing how it works, giving me more hope for testerkind.  

The Catch-29

I realized I have not posted the notes on what bugs there are to find, so I guess it's time to make the listing available for the likes of ChatGPT with search integrated, by posting it to a blog that is established enough to come up in search. One day I will clean the list up, but I'm posting it here for the convenience of whoever searches for it.

  • Css validator gives errors
  • Special character as start of line forces an extra line change in grey display box
  • Firefox text field does not clear with ctrl+R
  • Safari allows for scrolling when the scroll bug is reintroduced unless the browser is restarted
  • html validator identifies 3 errors
  • You're / we're / They're contractions not recognised as violations of e-prime
  • Two words separated by line feed are counted as one
  • Human being is noun but recognised as violation
  • Space is considered only separator for words and special characters are counted as words
  • Long text moves button outside user's access as vertical scroll is disabled
  • id naming is inconsistent, some are camel case, others not
  • Long texts without spaces go outside the grey area reserved for displaying the texts
  • Red/blue on grey has bad contrast
  • Zoom or resize of browser renders page unusable due to missing scroll bars
  • Contractions for word count (I'm) count as two words as per general searchable rules of how word counting works
  • The possible violation's category takes possessives and leaves for human assessment and would probably be expected to be something to create programmatic rules on
  • Possible violations does not handle typesetter's apostrophe, only typewriter's apostrophe in calculation
  • Two part words (like people's last names) in possessive form are not recognised as possible violations
  • Images missing alt text necessary for accessibility
  • Accessibility warnings on contrast
  • Mobile use not supported, styles very non-responsive
  • UI instructions for user are unclear
  • if word is in single quotes, it is not properly recognised as e-prime.
  • text box location in UI is not where user would expect it to be as per the logic of how web pages are usually operating
  • Site is missing favicon and security.txt - both common conventions for web applications
  • Resizing the input text field can move it outside view so that it cannot be resized back
  • Choosing which links are to overload this app and which open new browser window are inconsistent
  • The terminology of discouraged / violations would be clearer if consistent terminology, e.g. discouraged words and possibly discouraged words
  • Writing two possible violations together separated by full stop counts less words than violations. The counting logic does not match.
  • <html>be<html> ? 

Friday, August 22, 2025

Learning programming through osmosis

I identify mostly as a non-programmer. Yet, two weeks into a new job I'm already learning and contributing to Python and C++ code. The method that enables me to do this is ensemble programming, the idea of having a group of people working together on one computer on a task, taking turns on who types for the team while others instruct. For an idea to get from one person's head to the computer, it flows through someone else's hands.


This article shares key insights from my journey of a little over a year of learning programming through osmosis, just being around programmers working on code, without the intention of learning. As a result of learning, I rewrote my history with things I had forgotten and dismissed from my past. I hope it serves as an inspiration for programmers to invite non-programmers to learn to code one layer at a time, immersed in the experience of creating software together, to transform the ability to deliver. Lessons specific to skill sets get transferred both ways, and while I learn from others, they learn from me, leaving everyone better off after the experience.


Finding Ensemble Programming


Many different roles contribute to building software: product owners, business specialists, and testers. Yet knowledge of programming keeps these roles at a distance. I did not come to programming through wanting to program or taking courses on it, but through working with programmers in a style called ensemble programming.


As a tester within my team of nine developers, it was clear I was different. I wasn't particularly keen on learning programming since there was more than plenty of work in the feedback through empirical evidence and exploration that is my specialty, one I've developed in depth over two decades. I'm an excellent exploratory tester, and my team's developers have always been my friends with a pickup truck whom I can call in for assistance on anything where code needs to be created. Besides being the only non-programmer, I was also the only woman, and part of a team where some people would occasionally spout out things like "Women only write comments in code." Not exactly an inviting starting position.


Although I did not like programming, through my hobbies that started at the age of twelve and my computer science studies, which further killed my interest in programming, I had acquired experience in coding in twelve different languages. I started making small changes in how I looked at programming for my daughter's sake, as I did not want to transfer my dislike of code to a 7-year-old about to be embedded in an elementary learning environment where programming is everywhere, now that programming is a mandatory part of the Finnish curriculum.


The real change, however, started with Woody Zuill's talk at a conference I organized. Woody is the discoverer of ensemble programming. The idea of the whole team working on a single task, all together on one computer, just sounded ridiculous, yet as ridiculous as it seemed, I thought it could be a way for my team to learn from one another as well as to build the team. Instead of taking someone else's word on methods, I prefer experiencing them first hand. And it wasn't like we had to commit for a lifetime, just to try it out once or twice.


The First Experience Expands


With some discussion, my team agreed to try it out, but I knew I would be out of my comfort zone since I would have to be in front of a computer working on code. Our first task was to refactor some of our code with the Extract Method and Rename automatic refactorings, and we had an experienced ensemble facilitator lead the session for us. While not on the keyboard, I found myself able to comment on the names from the domain, and while on the keyboard, I noticed with each round that I was picking up things: keyboard shortcuts, ways to navigate, programming concepts, without anyone really explaining them to me while the work was being done. In the retrospective, I could reflect on my learning and realized that not only was I picking up things I did not know before, everyone else was doing that too.


I felt safe in a group, as I did not need to be fully paying attention to every detail at any time, and I was always supported by a group. Surprisingly, the expected negative remarks on gender did not come out in a group, whereas they would be a regular thing in a more private pairing setting. 


From that first experience, my team extended this into a weekly learning activity. I took the mechanism of learning further for myself, organizing various ensemble programming sessions with the programming community on different programming techniques and languages, learning e.g. TDD and working with legacy code in a hands-on manner. I introduced my team to ensembling on my work, exploratory testing, and they learned to better identify problems. In our ensemble programming sessions, there were several occasions where my presence in the room fixed an expensive mistake about to happen, from half a sentence of discussion. Finding a problem like this early on led to more efficient and productive work for everyone. Although it seems inefficient to have so many people working on one thing at the same time, the time saved in avoiding context switching and passing feedback back and forth, the increased focus on completing steps together with great quality, as well as the learning, made us develop much faster and with fewer future problems.


Joining An All Female Hackathon


I took the idea of ensemble programming to a weekend hackathon outside work and convinced my fellow teammates to try it out, but only three people out of four decided to be involved. I avoided setting the expectation of me being a non-programmer and just joined in with whatever programming skills I had, without disclaimers. There was even a woman participating with less coding experience than me, as she had never even looked at code before.


Out of that weekend, I came out with four major realizations:

  • The best programmer outside the ensemble only contributed graphics. In the ensemble, we were adding one feature at a time and committing regularly, and the senior programmer found it hard not to have modules of her own to work on. There was no long-term plan for the incrementally developed software, and the version kept changing under her. We tried summarizing the lessons on the technology used for her, but she kept hitting problems that blocked her.

  • I passed off as a programmer. No one noticed I was not a programmer. And the reason was that I had become one. I realized that programming is like writing. Getting started is easy, and it takes a lifetime to get good at. 

  • The non-programmer felt like an equal contributor. Her experience was that the code created was just as much hers as any of the others and that is a powerful experience. She learned the basics with us through typing for us, and reflecting with us. 

  • We had working software. Not all groups had the same luxury. In the ensemble, we had the discipline to have not just code, but working code to a scope that could vary depending on how much time we had to add more functionality. 


My Main Lessons


Cognitive dissonance is a powerful tool


The experience of working with an ensemble for over six months transformed how I perceived myself. No amount of convincing and rational arguments about how much fun programming is could have done that. When my actions and beliefs are not in sync, my beliefs change. And that is what ensemble programming did to me. It made me a programmer, through osmosis, and got me started on a long journey of always getting better at it.


Non-programmers have a lot to contribute


I saw that while I was learning a lot, I was also contributing. As a tester, I had information about intents of the users that seemed mysterious to my programmer colleagues. We would test better while programming, just because I was there. We would avoid mistakes that were about to happen, just because I was there. I could give feedback without egos in play, and we could all learn skills from one another. And even me being slow was a positive thing - it made the other programmers more deliberate and thoughtful in their actions, and they shared the realization that they created better code while slower. I ended up feeling really proud of how much better my developers learned to test with our shared ensembling time. 


Team got out a lot


I wasn't the only one who learned - everyone in the team picked up different things. It was a pleasure to see how the ability to add unit or Selenium tests expanded from an individual to a team skill set, and how many times we found better libraries just because one of us was aware of them.


We slowly moved from working on technical debt and cleaning up to a shared standard to having technical assets in the form of libraries that would enable us to do things faster. 


Everyone got their voices into the code better. We worked with the rule that if we had several ideas of how a problem could be approached, we would do both rather than argue while we had the least practical information about how it would turn out. And it was surprising to notice that something someone would fight for to the bitter end was good enough to accept after the implementation was available, and not just because people would lower their standards.


We also learned that when one of us did not feel like contributing in an ensemble format at first, it was a good idea to let them opt out. The party-like nature of the sessions and the evidence of the rest of us bonding and learning inevitably drew these non-participants back in on their own initiative later on.


Ensemble Programming as a Practical tool of Diversity


Ensemble programming is a great way of introducing new people to programming, or to testing for that matter. It transfers a lot of the tacit knowledge that is otherwise difficult to share. It brings the best of us to the work we do, as opposed to the most of each individual. While working together, we can remove a lot of the rework with fast and timely feedback. We raise our collective competence, allowing individuals to use specialized skills. We used the rule "learning or contributing" as a great guideline for thinking about when an ensemble is doing what it is supposed to.


As software is such a big part of our society's present and future, we need all hands on deck in creating it. We need to find ways of bridging roles without telling others that everyone just needs to be a programmer. In an ensemble format, I learned that while I picked up my hidden interest in programming, I would have been a valuable contributor even without it. There was a struggle both for me, going to do things I thought I wouldn't enjoy, and for the team, working in a setting they were not used to. It was worth the struggle to remove the distance I previously felt between myself and the programmers.


Just adding more women and people of color to the field of software development isn't enough if those people struggle to get their voices included. We need to do more than make the world of coding look diverse. With ensemble programming we can use that diversity to innovate the world of coding overall. (Props for this thought to Kelly Furness, who was in the audience at my DevOxxUK talk.)


It’s not just learning programming by osmosis, but the learning is mutual. Give it a chance.


I wrote this for VOXXED on April 13th, 2020. VOXXED has since been taken offline, so I am reposting it on my blog.

 

An hour and a half of intentional contemporary exploratory testing

Thank you to a CGI product development team for making space for us to have a 1.5-hour learning session on exploratory testing in your team day. We did not test our own products, but we tested a test exercise, and we tested with intention.



First, we brainstormed what we would do first on this specific application. Would we just click on the button with an empty value? Would we emphatically rest our hands on the keyboard to generate some text without meaning? Would we do a short sentence or a wall of text? Would we try special characters and only special characters? Or something different?


We then talked about our options. We could choose to read the wikipedia page to figure out what this e-prime is anyway. We could design from our experiences a sentence with, or without, the verb "to be", as we read what is available on the screen. Someone then proposed something I could agree with: "Let's type a bit of Shakespeare", and 'To be or not to be, that is the question' was chosen as our first executed test as a group.


Immediately after concluding it works, we documented that in automation, ran the automation, and confirmed we now had executable documentation. We also saw the automation fail, just to gain trust in our tools.


Now we had the choice: we could list our other inputs to this text field directly into automation. Or we could attend to the user interface first, and document after. This time we chose the latter.


We then got to me pointing out that there is a bug I know of, where it works on Safari but does not work on Chrome. Brainstorming what could break between browsers concluded with: we don't really know. It's easy to know after, but it is hard to know before having tested. If you feel like trying, it's a long text that will make the bug visible.


With 28 known issues, we found five. We slowed our decisions to a snail's pace, so that we could all move together and make deliberate, intentional choices. It's a whole-day course for me to teach how to find all the 28 that I am aware of.


The assignment for #ContemporaryExploratoryTesting is to make the invisible-ink listing of bugs on the paper visible, focusing on information that is relevant. The scribblings may include things that are just noise and that we need to intentionally steer away from. Are your testing results intentional or accidental?


Like someone from the audience concluded: they don't teach testing like this in school and they should.


If your testers need to learn to test, I know how to teach this. We now know more widely at CGI how to teach this. And the asking for better testing is growing, as I'm here for #scale.

I first wrote this into my LinkedIn post queue on socialchamp, which I am now experimenting with, only to realize this was a moment where it was already blog post sized. Creating content for LinkedIn should really not be my thing, so I write it here instead.

Friday, August 1, 2025

Bug advocacy is to go beyond reporting

I've been fairly non-committal with my talks and topics, because I am interested and experienced in more than one thing. One talk that I am particularly happy with to this day is one I delivered as a keynote at DDD Europe on the topic of 'Breaking Illusions with Testing'.

The point I was making is that there are more illusions than just the usual "does it work if you did not see it work", where we can imagine a thing being operational while no one uses it, or someone says that they used it and it didn't work.


The talk grew from paraphrasing a community phrase into a personal response to a developer, and then to many developers in the crafter community. The original phrase was not only breaking illusions but also trashing delusions, a bit more attack-oriented than the insight into what could be expected of me, professionally.


Remembering that the illusions are more than just bugs in the traditional sense, and that systems are sociotechnical, including people and organizations, I still test applications but also organizations and, unfortunately, people's patience in questioning the beliefs we (including me) hold dear.


A particular belief I find worth writing about today is the idea that testers *report* issues and it is someone else's job to do something about them. For years I have found this to be at the core of what I consider bad advice. There is a lot more variety to your choices of what you could do when you see an issue.



If you could just sort it out, like fix the bug, it would save time. If you wanted to educate while fixing it, you could pair up to fix it. You could leave it to make space for more important bugs, and leaving things be and choosing your battles has been the core of what I find I have needed to learn. I choose a thing to fix, I choose things to leave, and I choose things to say.


More recently, I have also observed patterns in framing things when saying them. I must have become a manager, because the kind of saying it where the solution is you yourself escaping the problem, assuming the grass is greener on the other side, just feels like giving up when we should rally together and collaborate.


For me, a major source of energy is what I call spite. I recognize myself annoyed for a good reason, collect that energy and do something that looks just like me as a positive action to change the world just a bit.


No women of merit in speaking? I do 50 keynotes and speak in 28 countries.

Poor contents in ISTQB Foundations? I volunteer for 6 months to rewrite, only to learn that some parts of that rewrite are lost for lack of intentional version control.

No training available? I teach what I can, encouraging community teaching, and learn ways to go around the corporate constraints.


We have agency, we have choices. And we need more fine-grained advice for testers than encouraging them to stop at reporting the problem.


I first wrote this into my LinkedIn post queue on socialchamp, which I am now experimenting with, only to realize this was a moment where it was already blog post sized. Creating content for LinkedIn should really not be my thing, so I write it here instead.

Thursday, July 31, 2025

Code Reviews Have Already Changed

Reading just about anything these days, your mind is filled with doubt: is the thing I am reading written by a person or generated with AI (from a person's prompt)? And that matters, because if writing takes 1 minute, 600 people reading it for 10 seconds each takes 100 minutes. Producing text can be automated. The world has changed that way.

Reading was the problem already before generative AI. In my TSQA talk from 2022, 'Something in the way we test', I was addressing the ridiculous notion of writing 5000 test cases, or spending 11 working days just reading them through. In the two years after that I learned over and over again that there really were no relevant answers to any of the realistic queries we had with business representatives captured in the 5000 test cases I chose to put to the side.

Reading is an even more significant problem now with generative AI. That makes reviewing before making people read things more essential than ever. 

Six months ago I started drafting a talk (without a stage for it to be presented) with the title: 
Code Reviews Have Already Changed

This was a talk that built on the TSQA talk, with a genAI perspective from recent experiences and a call to action in really learning to do what I had another talk formulating experiences with:

RAGified Exploratory Testing Notetaking

This talk was built on years of experience of taking notes, and how those notes were supercharged when used with genAI in a RAG framing.

During the summer break I came to realize that I don't have a need to do talks, so I might just as well write selected pieces into my blog.

So here are my two selected pieces: 

1) The Selenium open source project's year of CodiumAI / Qodo

While active as a member of the project leadership group for Selenium, I had fun watching dynamics that I wanted to go back to with a research perspective. That is, an AI review assistant was in place, and it had fascinating impacts.

Dehumanized feedback was easier to dismiss. Emotion management for giving feedback in code reviews is a real thing, and having genAI generate code reviews produced a stream of silent dismissals UNTIL it found something relevant that revealed people do read. The combination of PRs and chats provides a fascinating record of this, showing that most of the feedback did not warrant a reaction and was clearly easier to dismiss than a real person's review.

Simultaneously, you could see AI lowering the bar for people to try contributing without always knowing what they were doing. Some open source projects went as far as refusing AI-assisted contributions. Others, like Selenium, saw an increased load of attending to people's emotions when they would get feedback from reviews. 

Code reviews have changed: 

  • There are more of them to do, and the first reviewer after the AI does not seem to exercise sufficient critical thinking with context
  • Knowing something is AI-generated is valuable information to save time on emotion management labour
2) Ownership of generated code / text is hard at work

A colleague in between projects was working with me on a proof of concept on a particular combination of platform product + test automation. People in between projects have a lot of time and focus, while those of us in projects have other things going on. Instead of my assistance, genAI was present for a month.

When the day came that I reviewed the code, the result was deleting most of it as unnecessary. The same capabilities were already available, and you'd know that if you read the first page of the tool's tutorial.

"GenAI made me do it" was poor excuse for not reading a definitive tutorial and creating something to delete for a month. 

Similarly, more recently, someone worked for a week before I was available. In the week I was available, I spent 2 days reading what had been written and trying, with a lot of effort, to preserve some of the work that came before mine. Today I learned that I had spent 2 days preserving AI-generated text, because you just don't throw away a reported week of someone else's work.

Ownership has changed:
  • Your agency is essential. Accepting delays in feedback still causes trouble. 
  • Knowing your text is AI-generated (including the prompts) would be a more helpful way of contributing to next steps than just generating the text. Protecting feelings costs time.
  • Generated text, with knowledge of it being generated, acts as external imagination and sets you up to compete to do better. 
If I took the time to put this together into the story I would tell on stage, it would still make a great talk. For that to happen, someone needs to do the work of inviting me because I will not do the work of participating in calls for proposals. And I won't pay to speak.