Thursday, December 19, 2024

Purposeful and realistic return on investment calculations

It's a time of the year that allows us all an excuse to message people we usually don't. I had a group like that in mind: managers of all of my tester colleagues at work. There are two reasons why I usually don't message them:

  1. We coexist in the same pool as peers, but we have no official connection 
  2. There is no list of them easily accessible
You can't create a connection with your peers without being intentional about it. Having now tracked through coverage of fellow customer contact people (200+ by now), I needed a new dimension to track. This is just one of them, although an important one. 

Yet still, you can't contact them if you don't know who they are. With a large number of test professionals, asking around won't work. And the manual approach of tracking through the organization chart is probably a lot of work, even when the data is available. 

For six months, I have accepted that such a list does not exist. Call it holiday spirit or something, but that response was no longer acceptable to me. So I set out to explore what the answer is. 

Writing a script can be one of those infamous "10 minute tasks", but for purposes of discussing return on investment, let's say it takes 300 minutes. Originally I wanted to create a list of test professionals and automatically update our email list with it, because I hate excluding new joiners. Since I was exploring, I never forgot what I set out to do, but I may have ended up with more than I had in mind at first. I take pride in my brand of discipline, which is optimizing for results and learning, not for a commitment made at a time when I knew the least -- which is any time before this moment. 
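For the curious, a minimal sketch of the shape such a script could take -- assuming a hypothetical people-directory export with name, title and manager columns; the real data source, fields and selectors differ and are not shown here.

# Hypothetical sketch: build the two lists from an exported people directory.
# Assumes a CSV export with "name", "title" and "manager" columns -- the real
# data source, fields and filtering rules will differ.
import csv

TESTING_TITLES = ("test", "qa", "quality")  # assumed title keywords

def build_lists(path="people_export.csv"):
    testers, managers = [], set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if any(word in row["title"].lower() for word in TESTING_TITLES):
                testers.append(row["name"])
                managers.add(row["manager"])
    return testers, sorted(managers)

if __name__ == "__main__":
    testers, managers = build_lists()
    print(f"{len(testers)} test professionals, {len(managers)} managers")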

So, I invested 300 minutes to replace 299 minutes of manual work. Now with that investment, I am able to create my two lists (test professionals and their managers) by investing 1 minute of attention in running the script (and entering the 2FA response). How much time is this saving, really?

I could say I would run this script monthly now, because that is the cadence of people joining. Honestly though, quarterly updates are much more likely because of the realities of the world. I will work with the optimistic schedule though, because 12x299 is so much more than 4x299. Purposeful return on investment calculations for the win! 

You may have done this too. You take the formula: 

You enter your values and do the math (obvs. with a tool that creates you a perfect illustration to screenshot). 

And there you have it. 1096% return on investment, it was so worth it. 
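For the record, the formula behind the screenshot is the usual one, return on investment as net gain over cost, and with the purposefully optimistic values above the arithmetic goes like this (a minimal sketch reproducing the 1096%):

# ROI = (gain - cost) / cost * 100, with the purposefully optimistic values
cost = 300                    # minutes invested in writing the script
manual_work = 299             # minutes of manual work replaced per run
runs_per_year = 12            # optimistic monthly cadence (quarterly would be 4)

gain = runs_per_year * manual_work   # 3588 minutes
roi = (gain - cost) / cost * 100     # 1096.0
print(f"{roi:.0f}% return on investment")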

The managers got their electronic Christmas card. I got their attention to congratulate them on being caretakers of a brilliant group of professionals. I could sneak in whatever message I had in mind. Comparing cost and savings like this, purposeful investment calculations, it's such a cheap trick. 

It does not really address the value. I may feel the value of 300 minutes today, but is there a real repeat value? It remains to be seen. 

That is not all that we purposefully do with our investment calculation stories. I also failed to introduce you to the three things I did not do. 

  1. Automating authentication. 2FA is mean. So I left it manual. Within this week we have been looking at 2FA on two Macs where automating it works on one and not on the other. I was not ready to explore that rabbit hole right now. Without it, the script is manual even if it runs with 1 minute attended and 15 minutes unattended. 
  2. Automating triggering it with a pipeline. It's on my machine. It needs me to start it. While pushing the code to a repo would give access to others, telling them about it is another thing. And while it works on my machine, explaining how to make it work on their machines, given the differences in machines and knowledge levels of the relevant people, is yet another; this should really just be running on a cadence. 
  3. Maintainability. While this all makes sense to me today, I am not certain the future me will appreciate how I left things. No readme. Decent structure. Self-documenting choices of selectors, mostly. But a mental list of things I would do better if I was to leave this behind. I would be dishonest saying this is what I mean by proper work when I set out to leave automation behind. 
When you factor all of that in, there is probably a day or a week of work more. A week, because I have no access to pipelines for these kinds of things, and no one can estimate how quickly I will navigate through the necessary relationship building to find a budget (because cloud costs money), gain access, and the sort. 

The extra day brings my ROI to 380%. A week extra brings me to: 
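The same arithmetic with the extra investment added, assuming 7.5-hour days and a five-day week -- my assumptions for illustration, with the extra day reproducing roughly the 380% figure:

# Same formula, with the realistic extra work added to the cost.
# Assumes a 7.5-hour day (450 min) and a five-day week -- illustrative only.
gain = 12 * 299                    # 3588 minutes, the optimistic yearly saving
for extra in (450, 5 * 450):       # an extra day, an extra week of work
    cost = 300 + extra
    print(f"+{extra} min -> {(gain - cost) / cost * 100:.0f}% ROI")
# roughly 378% with the extra day, 41% with the extra week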

Now that's a realistic return on investment calculation, even if it still misses the idea that this could be left not done, not repeated, if it was not valuable. And the jury is still out on that.

This story is, most definitely, inspired by facing the balance of purposeful and realistic often in my work. And I commit to realistic. Matter of fact, I am creating myself a better approach to realistic, because automation is great but false expectations just set us up for failure. 



Thursday, December 5, 2024

A Six Month Review

That time of the year when you do reflections is at hand. It's not the end of year yet, even if that is close. But a few relevant events in cadence of reflections culminated yesterday and today, leading me to writing for a seasoned tester's crystal ball. 

The three relevant events are all worth a small celebration: 

  • Yesterday was my first day after the trial period at CGI as Director specializing in testing services and AI in application testing. We are both still very happy with each other. I should say it more clearly - I love the work CGI built for me, and I love that I get to bring in more people to do that work with me also next year. CGI's value-based positioning as an organization that supports volunteering, and its great community of testing professionals (testers, developers, product owners and managers/directors throughout the organization showing up for testing), have been a treat. 
  • Today the TiVi ICT 100 Most Influential list was published, just in time for Finnish Independence Day, and I found my name on the list for the 6th year in a row. You can imagine there are more than 100 brilliant professionals influencing ICT in Finland, and Finland has had a good reputation of educated ICT professionals with internationally competitive prices, meaning that for a small country, we have a lot of brilliant people in ICT. Representing testing on this list, even with the title 'Director', is a recognition for all the great people teaching each other continuously in the community. Testing belongs. 
  • Today we received positive news of one frame agreement bid I had been contributing to in my 6 months at CGI. Building a series of successes in continuing as a partner of choice with me around brings me joy. 
While these things are the triggers of reflection today, it is also a good time to take a look at themes of testing coming my way. 

Distributing Future Evenly

When searching for work with a purpose, I welcomed the opportunity to put together themes of relevance:
  • Impact at Scale. I believe we need to find ways of moving many of our projects to better places for quality and productivity. Many organizations share similar problems and there has to be ways of creating common improvement roadmaps, stepping plans to get through the changes, and seeing software development (including testing) with increased successes. Scale means across organizations, but also over time when people change. 
  • Software of Relevance. I wanted to work on software that I feel connection with. I have been awarded multiple customers with purposes of relevance and feel grateful for the opportunities. 
Turns out this is a combination of conversations internally with people I work with, our current and potential customers, and the testers community at large. 

I have met over 150 peers at CGI in 1-on-1 settings (I tracked statistics for the first four months and, reaching 160, I decided counting something else would make sense). I have enjoyed showing up with our internal Testing Community of Practice in Finland, and started to create those internal connections globally. I have learned to love the networked communication expectation where hierarchy plays a small role. I've done monthly sessions for the external testing community of practice Ohjelmistotestaus ry with a broadcast of topics I work on. And I have been invited to conversations with many clients. 

AI in Application Testing

With testing as my backdrop, living up to a reputation of 'walking testing dictionary', there is a big change into the future with increased abilities in automation, with AI helping with quality and productivity.

I have looked at tens of tools with AI in them. For unit testing, for test automation programming, and for any and all tasks in the space of testing. From exploring these tools, I conclude there is a need to see through the vocabulary to make smart choices. With models as a service, it's what we build around those models that matters. The technical guardrails for filtering inputs and outputs. The logic of optimizing flows. The integrations that intertwine AI to where we are already working. 
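To make "what we build around the models" a little more concrete, a deliberately simplified sketch -- the blocklist, the injected call_model function and the output cap are all hypothetical placeholders, not any particular product's guardrails:

# Hypothetical sketch of guardrails wrapped around a model-as-a-service call.
BLOCKLIST = ("password", "social security")   # assumed input filter terms

def guarded_completion(prompt, call_model):
    # Input guardrail: refuse prompts containing terms we never want to send out.
    if any(term in prompt.lower() for term in BLOCKLIST):
        raise ValueError("prompt blocked by input guardrail")
    answer = call_model(prompt)        # the model-as-a-service call, injected
    # Output guardrail: keep answers within an expected size before passing them on.
    return answer[:2000]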

In these 6 months, I created a course I teach with Tivia ry - 'Apply AI of Today on Your Testing'. Tivia sets up public classroom versions, and both they and CGI directly are happy to set up organization specific ones. 

The things that I personally find exciting in this space in the last six months are: 
  • Hosted models. Setting up the possibility to have models on your own servers (ahem, personal computers too) so that I can let go of someone else modeling what my data tells about me or reveals from my work. 
  • Open source progress. Be it Hercules for agents turning Gherkin to test results, or any of the many libraries in the Selenium ecosystem showing proofs of concepts on what integrations are like, those are invaluable. 
  • Customer Zero access. Having CGI be a product company allows us to apply things. Combine the motivation and means, and I have learned a lot. And yes, CGI has a lot of products. Including CGI NAVI, which is a software development artifact generation product. 
Digital Legacy

The time focusing on scaling testing has warranted me revisiting my digital legacy. From being able to create AI-Maaret by having 20 years of my thinking in written formats, to making choices of how I would increase access to my materials for everyone's benefit, it has been a continuous theme. 

From 50 labels of courses in Software Testing, I brought things down to only 18. 


Should testing expertise be a capability you are building, we could help with that. We're currently prioritizing which of these we teach for CGI testing professionals next year. In addition to all the other training material we already have had without my digital legacy. 

Purpose-wise, I am now moving from having access to this myself to institutionalizing the access. And that means changing license for next generations of this from CC-BY to CC-BY-NC-SA, and you could always purchase licenses that allow other uses too. 

Collaboration and Sharing

With tentacles in the network towards all kinds of parties, I still facilitate collaboration. My personal vision is to frame a lot of work with open sharing, this being my current favored quote:


The tide is something we create together. And we want to go far. Sharing openly is a platform for that collaboration, so you see me sharing on LinkedIn, Mastodon and now also Bluesky. My DMs on these are open. 

Networking

With open DMs and email (maaret.pyhajarvi (at) cgi . com), I am available for conversations. I have targets set on networking and should you have a thing we can discuss for seeking mutual benefits, don't be a stranger and get in touch. 

I am open to showing up in events, but I am still not proposing talks in CfPs. Please pull if I can help, and I help where I can. 

I show up for Tivia ry to take forward the agenda of Software in Finland. I show up for Ohjelmistotestaus ry to build a cross-company platform on testing. I show up for Mimmit Koodaa in any way I can, because that program is the best thing we've had for many many years. And I show up for Selenium, as volunteer in the Selenium Project Leadership Committee. I show up for FroGSConf open space as participant because it is brilliant. I show up for private benchmarking, currently with three groups of seasoned, brilliant professionals driving a global change in testing through supporting one another. 

As a new thing, I have discovered "Johtoryhmä", women in ICT leadership in Finland. I will show up for that too. 

And when we meet, please talk to me. I am still a socially awkward extrovert who imagines people don't automatically want to talk to me. You talking to me helps me. And our conversations may help us both. 

Saturday, November 30, 2024

You are enough

Sometimes there is just too much going on. Too many self-volunteered tasks and deadlines, some more visible than others. And you find yourself doing a lot. Feeling in control though, you chuck through the pile recognizing there is no progress if you juggle the load, filled with anxiety. Finding that sense of agency, you do what is in your power to do. That sense of power, it is essential. 

I come to think of this because of multiple triggers in this space in the last weeks: 

  • Firehosing information
  • Exercising replan
  • Managing anxiety in others
With these stories to share, I ground myself to the purpose I have a blog for in the first place. Not to write the perfect articles. But to write down things to reflect on. Usefulness of my reflections to others remains at the discretion of the reader. It's different to what the majority of bloggers do, but then again, not everyone frames their blog as 'a seasoned tester's crystal ball'. 

Firehosing information

I know a thing or two. I learn more by building a foundation of what I know by sharing it, and have built a bit of an identity in reflection and proving myself wrong. I take pride in the 180-degree turns of opinion I can recognize in my work. The attitude towards whether continuous integration is a good idea. The idea of what test cases should look like. The idea about separation of concerns to developers and testers. The attitude towards automation. I have written evidence that I learned and changed views. It used to scare me enough to not say things of today. But I learned that I am enough today, even when today's me is not enough for tomorrow. 

I have needed to tap into this lesson a lot now that I have a lot of new colleagues, with a lot of those learnings I have needed to lead myself through over the years. I have needed to remember that I did not change my perspective because I was told what was right. I changed my perspective because I observed myself the options, and had agency in making those changes to my internal model of the world. 

I have a lot to say. And I moderate on how much of it I say. I have decided now that I say one piece from stage a month with audience of my colleagues and anyone in the Finnish community, through the platform that Software Testing Finland (Ohjelmistotestaus ry) offers. And I hold space for conversations with my colleagues on leading and managing testing twice a month. Once a day I can post on our internal channel to share something. Once a day I can post on LinkedIn to share something. And on Mastodon / Bluesky, I can say what I want to say when I want to say it. 

I write and say more in a week than others can consume. Sharing is an outlet, and a processing method. 

November was a month of firehosing. I did seven new talks. One because of my choices of cadence, but 6 others because someone pulled and asked for them. Doing my tally of stage presence a month early was an act of offloading. 

It's been hard to remember that the one guy giving me feedback last year that everything I ever say is shit, and the other guy giving me feedback this year that "quantity is not quality" -- just in case I did not already deal with enough negative internal self-talk -- they aren't the full picture. I am enough. You are enough. It is true for us all, without comparison to others. 

Exercising replan

While my head needs an outlet of information through sharing, the visible parts of what I do aren't all I do. Life has been a lot, work has been a lot. 

I found myself in a situation where I had so many competing tasks to complete that I couldn't. 

I couldn't deliver a report to a customer that I promised. So I told the customer, and took a week of extra time. Turns out they appreciated the confession of a consultant overestimating the pace at which they can analyze a complex situation. 

I could not find time to talk to half a dozen people about test automation like I wanted to. I didn't, and while they may not forgive me, I forgave myself by letting them know what I learned while I could not show up for them. 

I still have one more thing on replanning, and I am balancing the cost of replanning on that one. Me doing it might just be easier than me replanning it for someone else; we've already been through two bounces back to me. 

Ending up with too much requires a replan. It requires confessing the need for help, the need for time and space, and support. Remembering I am enough even when I am not enough for all the things I may end up with. 

Managing anxiety in others

Being a leader is about having people who follow you, sometimes with positional power but sometimes just because they were in search of someone with ideas. I still call myself a regretful manager, because I don't want to manage; I don't want to lead. I would much prefer if we shared the leadership and the doing, and found ourselves negotiating our journey together.

I have bubbles that recharge me where the world is peer to peer. Our group of regretful managers. My monthly benchmarks on work with a peer, now running for multiple years. I love these groups. 

But I also have other groups where I show up to hold space. Even on the volunteering side, I find I volunteer to manage. Set context for decisions, while trying to stick to my own boundaries of what I can take on. And making space, with the spoons I have, to ease the anxiety of others. Reminding them they are enough, because while I can tell that to myself and believe it, some people need to hear it from me. 

With these stories, I would remind you: you are enough. You have agency. You have outlets - writing, talking from stage, talking to people. And just because something is not easy or even possible, it's not on you. 

You are enough.

Monday, November 18, 2024

Cost-Constrained Exploratory Testing

The world of software used to run on-premise, and the basic promise was this: you buy a computer of the expensive sort, a "server", and whatever you do on it ever since costs little. We did not calculate electricity or operations team attention to that server, it was just there. Costs were essentially there but distributed and hidden. 

Then we got the cloud and now we pay per use. I can't be the only person who caused 2k€ extra costs on her 1st month on cloud use by not understanding all aspects of pay per use and differently priced services, but once burned, I have been more cautious. 

With that cost caution in mind, I set out to observe my thinking and actions when trying out a new genAI tool published today, Hercules, https://github.com/test-zeus-ai/testzeus-hercules

I loaded some money on my personal OpenAI API account. I verified that my settings would lead me to losing the money I loaded but no more. I created the API key I needed to run Hercules. 

The first four tests I explored cost me 0.36€. Two tests later, I am at 1.13€ and well aware of the cost of exploring. I also note that awareness of the cost makes me consider somewhat more carefully what I will try. 

Hercules?

The high level promise of Hercules is to do agentic transformation (fancy way of saying multiple LLM calls within a logic frame that could be just about anything) of Gherkin to test results. So given this: 


I get this: 

No code written. Annoying level of detail with entering inputs, pressing buttons and all that, but a pass as it should be for this case. 

That was my exploratory test #2. The first one skipped line 7, resulting in a fail because you have to press the button to see the results. Tests #1 and #2 cost me 0.18€, and did not scare me off the cost-constrained and cost-aware exploratory testing I was on. 

With test #3 I invested in changing the level of language for my gherkin file. Adding the URL to my gherkin examples for my exploratory testing foundations course, I went about seeing what would happen with three tests in a single feature file, where one test is two tests parametrized, totaling four tests. 

Again a green run. Three tests, not four, but watching the video evidence of the last one, both sub-scenarios were included. 

Where test #2 execution cost was 0.085, this one cost me 0.312. Whatever the unit, because it did not match what I ended up seeing on the costs panel in the OpenAI portal. 

Test #4 I dedicated to seeing a test fail for the right reasons. Taking incorrect prime analysis and setting expected values to calculate 8 words for "to be or not to be - hamlet's dilemma", I indeed got the fail with an unexpected error analysis. The words on my tests and on the UI for the concepts don't match literally, and I wrote them in a different order, and yet the connecting of concepts hit the mark and compared the right things. 

Tests #1-#4 cost me 0.36€ to run. 

For test #5, I was sure I would end up adding to the cost. I took a screenshot of the application and passed the screenshot to Claude asking for Gherkin feature file. 

Not quite perfect scenarios. Discouraged words is a concept that is completely misinterpreted, but that also tells about the UI concepts not being intuitive. Example for e-prime mastery tries to avoid the verb 'to be' but ends up still having one amongst the avoided examples. 

Running the test, I start to see 429 responses from the API - asking too much too soon, at least as per my cost-aware settings - and after some minutes I decide not to risk the cost, paying 0.50€ for this one failed experiment. Failed as in it did not produce the report, but it did produce some of the videos. 


The video showed me that not specifying where the app I want tested lives results in testing eviltester's version of the same. First hit on Google and all that. 

The three first tests resulted in failure. 'Will be' is not 'be', and other imprecisions in the scenarios end up requiring some more work. 

<failure message="EXPECTED RESULT: The tool should identify the verbs 'am', 
'is', and 'will be'. ACTUAL RESULT: The tool identified the verbs 'am', 'is', 
and 'be'."/>

Final test #6 was against a live system of a pair I tested with. We tested search, with a passing test, ending up at €1.13 as my out-of-pocket cost for exploratory testing a new thing. 

That investment pays me back a "coffee or beer" next time I meet the tool creator. Or, when revealing Pepsi Max as my beverage of choice, a six-pack of it. I ended up finding a bug on telemetry, and the bug ended up being fixed and the fix deployed already. 

Conclusions

Every time one explores under a constraint, it has an impact on thinking and resulting intent on action. Cost-awareness drives me to think about what information I am seeking before hitting the tool. 

Costs are one side of the coin, but we do pay a lot more for people figuring out locators and clicks to implement what gets generated here. 

Costs drive reuse, and wishes that we would share the scenarios so that we don't have to rerun the same. Learning from what others already paid for is in the future.

Generating worthwhile gherkin might still be a human effort for now. 

Replay without the GPT-4o cost would be nice, but we did not find it - yet. 



Friday, November 8, 2024

Dear customer, what you assess me for is extremely ambiguous

Dear customer, 

I am delighted to note you have come to realize you could use some testing services. I am delighted, because I deeply care about how those services could help customers work out their quality, productivity and decision-making areas with timely empirical information. And I am even more delighted I get to be in the position to receive such requests. No sarcasm. I look forward to any level of collaboration we may have, be it the RFP you just submitted or the potential engagement that could follow. 

That said, I need to bring to your attention that I am puzzled with many things you are asking for. 

As someone who studied at Helsinki University of Technology for 251.25 cr (just checked) but never graduated, I have a bit of a pet peeve on the idea that my 27 years in testing don't qualify me to work on your projects. I can justify your request for a Bachelor or Master of Science degree by balancing it with the fact that your other requirements often speak of someone with few years on the job, and delight myself in how that is a smart move on the requests. The balance of cheap - some experience - completed studies bodes well. I would wish you would allow for expensive sold cheap - lots of experience - the knowledge you seek from the studies without the completion. But that is kind of a personal request. 

That is not why I am writing this public letter to so many of you though. That is just a backdrop. The real reason is that you ask for other things, and I don't think you realize how ambiguous those requests you make are. I am sampling some from the 5 months of samples I now have had access to. 

Requesting Acceptance Testing experience

When in 2001 I summarized my literature research on Testing in Extreme Programming (XP), the complete lack of tester in the method was a key takeaway. Acceptance testing was used as a way of describing a particular style of functional tests in the team. The world of testing knew these as system tests, yet the term Acceptance tests took a significant foothold. 

Meanwhile, the rest of the world thought of Acceptance testing as anything that the customer organization would do for purposes of acceptance. It could be anything from accepting based on a report of other testing, to very detailed and organized test effort ensuring the system adheres to explicit and implicit requirements in the scale that was asked for (and paid for) as well as fits the purpose of use it was commissioned for. Key takeaway is that it is something the customer organization does. 

When you ask for experience of Acceptance testing, you could mean modern end to end automation on GUI and API, driven by examples. You could also ask if we have people who worked on a customer organization's payroll. You could ask if they have been specifically commissioned to represent a customer organization in an acceptance testing project. 

You probably had an idea of why you asked for this experience. You would do better if you could explain that. 

Requesting Integration Testing experience

If there ever was an ambiguous term, integration testing is guaranteed to be the one. Ask 10 people, you get 10 different answers. 

It used to be popular to call the rehearsal rounds of system testing integration testing. The only real separation between these two was that contractors would get to keep bugs they found in integration testing a secret, but had to share them with the customer while in system testing. I am almost certain you weren't looking for people who have played this game. It was really popular 15-20 years ago. 

Those tired of the games started calling it integration testing if we tested a feature while other features were still unimplemented. Usually the focus was on integrating that feature to other pre-existing features. 

Some books recommended integration testing is when we have components or subsystems, and we specifically focus on the back and forth communication between those parts. People rarely knew how to do this in practice though.

Other books emphasized that integration testing is about integration strategies, focusing on smaller scales in large and complex systems, advising to choose an incremental strategy over a big-bang strategy. There integration testing meant control of test environments. 

Many people though went with a very straightforward idea. They called it integration testing if you could test through an API. However, most people would call that API testing rather than integration testing. 

You probably had an idea of why you asked for this experience. You would do better if you could explain that. 

Requesting Test Automation experience

Your asking for test automation is a different kind of ambiguous. It is hard to find people who don't have experience of test automation in their teams, but only some write it themselves. You may not realize it, but people who have test automation experience not from writing it themselves but through exceptional collaboration with developers may do better in the long-term automation efforts you could be seeking. 

My personal pet peeve is that a lot of people who know how to automate don't know how to test. So they have experience in test automation but that may not be the test automation you want. 

You probably have more specific needs than the high level category. Maybe you have existing tests that need maintaining? Maybe you are about to get started? Maybe you need build pipelines rather than the tests? 

You probably had an idea of why you asked for this experience. You would do better if you could explain that. 

Requesting UI Testing experience

By now, you already know what I am about to say. Experience with web UIs is different from experience with Windows UIs, or the intricate details of Java UIs intended to be cross-platform. You may mean usability testing. You may mean a specific technology. You may mean awareness of isolating UIs from the backend so that you can do great UI component testing, enabling UI changes of the future. 

You probably had an idea of why you asked for this experience. You would do better if you could explain that. 


I want to help you. I want to find you the right people, exceptional people. I want those people to enjoy working in your projects, doing good work, and getting praise. And it would be a lot better for us all if we ended up aiming for the same thing. 

The ways you ask in public sector bids aren't helping you. We've learned how to work with those, and change is so much work that we just do what you ask. Unless you ask that we figure out better together. I am sure I am not the only contractor representative who would be happy to volunteer to work with you for better. Perhaps we could do this as a joint community effort, customers and contractors roundtable style? 

Tuesday, October 29, 2024

Multi-Model Assessment Practice for Describing Skills

I keep making the same observation: we are not great at verbalizing our skills and knowledge. Not just about testers and testing, but particularly about testers and testing. Instead, we tend to behave as if testing carried more meaning of similar work and results than it does. 

Just a couple of examples. Testing could mean that they: 
  • Reproduce customer reported issues from customer acceptance testing, improving insufficient reports and running change advisory boards to prioritize the customer issues. 
  • Spend time conducting a technical investigation of the system and its features and changes, reporting discrepancies they are curious about. 
  • Maneuver builds between environments so that customer acceptance testing can do what they feel like doing on the version targeted for production. 
  • Reboot test environment on Wednesdays since by that time the low-end test environment has seen enough of week to be less performant.
Well, you could continue the list. You could describe the things you routinely do. You could try avoiding saying you plan testing, design test cases, execute test cases and report results. You could try other words for size, and you could describe things you routinely do, have done in the past and could do again, and even things you are working your way into. Elizabeth Zagroba does this exceptionally well with her trustworthy resume. 

As a consultant, my CV (as well as my consultant colleagues' CVs) is often looked at for selecting us for delivery projects. Needing to create the concept of an Exceptional Consulting CV, I am in the process of experimenting with what works for me. Since my latest version of the CV is labeled confidential, I won't go about sharing it. Instead, I share some of the thinking that I have been doing for it. And well, for annual performance reviews. The two reasons to wrap your skills and competences in a nice sales-oriented package. 

Personal Skills

My first infographic is inspired by the re-emergence of frustration on how consulting CVs are often looked at for skills that are equivalent to "has painted with blue color", without considering transference of experiences between technologies, or combinations making application of a particular color particularly insight-driven from experiences. I have written Python code. I have also written Java code. I choose to not list language-specific application frameworks even though I have used some of them, because I am not making a case for hiring me as a developer. I want to focus on developing for purposes of test problems. 


At this point of my career and my interests, I am very confident in my belief that I can learn technologies and application domains so that my feedback after 6 months includes remarks of surprise on the things I know. I have learned to learn. That's a rainbow skill though - so meta, and particularly hard to explain.

Roles

Past experiences mean little if the future I aspire to isn't about repeating past experiences. The future I aspire to is one of testing specialty. But alas, testing is not a single specialty. At minimum, it's three. And while I may be assigned the default role of Test Lead, I always prefer the role of Test Analyst and will intertwine that with whatever ideas people have about Test Automator. I call this mix Contemporary Exploratory Testing, and my insistence on this mix leads me to another infographic: the idea of how your order of growing on the main areas of skills depends on the role. 


There are beginner test leads, beginner test analysts and beginner test automators. Acquiring them all is an action distributed over decades, and juggling tends to create a fair share of thrashing for focus. 

Hats

Since 'roles' is taken for the concept above, I have been particularly playing with observing people with hats. These are my descriptions of the more detailed (yet not detailed at all) listing of what components testers (and developers testing) build their test role from. I created the first version of this four years ago, observing my team working with embedded software testing. In that particular area, test lab maintenance, the hat of 'test technician' is easily a full-time job. Attending to the artificial sky, coordinating who gets to stand inside the radar that fits 5 people, simulating rain by taking your test environment to the shower - all experiences that I have now witnessed and scheduled. 

Since the 1st version that I presented as a keynote at Appium Lite Conference 2021, I had my later team of mostly developers assess their focus (and make agreements on the gaps), and today I updated the model. A version with resolution where you can read the purple box fine print is also available.  


The focused / occasional / aspiring intersection for 3x17 (roles x hats) is a way for indicating coverage. Very tester of me to want a coverage measure on a model. 

Levels and Proficiencies

Generally I am not a great fan of assessing skills on a scale. As soon as you stop using a skill, skills atrophy. As soon as you are applying a skill and paying attention to thoughtful application, you are improving. Knowing where you are takes a lot of conversations, and there is no better way of learning your place on the journey and making steps ahead on the journey than ensemble testing. But say you want or are asked to describe levels and proficiencies anyway. Well, I am.  


I have recently discovered a love of the Hartman proficiency taxonomy. On the level of hats, I get a richer vocabulary to explain where I am, and where I am growing. On the level of roles, granularity suffers. And on the level of task analysis, I can really use it to drive home growth. 

Industry-wide benchmarking

For industry-wide benchmarking, where I am at we are using WTW. For obvious reasons I am mostly curious on how I am leveled and whether that is right, since some folks use this to argue that the Women's Euro is about the same as the Men's. I feel like I should try voting for placing me on the Professional / Expert scale where I reside. 


It's a career progression model. Professional has a QA track within. And I am somewhere half in Professional, half in Management, not fitting anyone's model as such. 

Engineering Ladders

The final piece of the multi-model assessment practice walkthrough is engineering ladders. Doing justice to testers in this is always a bit of an exercise for the other models, but this grounds us all on one. And I appreciate that. I don't believe there is a magical tester mindset. Developers can test; the split we have is less about mindset and more about allowing time for perspectives. 

Test leads tend to be growing on the people, process and influence axes. Test analysts tend to grow on the system and process axes. Test automators tend to grow on the technology and system axes. I have worked with people who can work, skills-wise, on the full range of five dimensions, but they can't do it time-wise. The full range is a team of people. 

Multi-model Assessment Practice?

A fancy name, right? It basically says take as many models as you feel you benefit from. This was my selection for describing skills, today. I have another selection for test process improvement assessments, which I also do multi-model: with Quality Culture Transition Assessment (modern testing), TPI-model (testing), and Testomat Assessment model (test automation), spicing it usually with Context STAR model, and whatever complementary models I feel I could use based on my experience as assessor on observations I need to illustrate. I illustrate a lot of things. Like the things in this blog. 

Create your own selection and use bits of what I have if that is useful. I care less for a universal model, and more for a model that helps me make sense of complexities I am dealing with. 




Monday, October 28, 2024

Selenium is the friends we made along the way!

Today, October 28th, is a relevant day. While it is National Chocolate Day for our American colleagues, it is also the official date when we commemorate Selenium. 

On October 28th 2024 Selenium turned 20 years old, and we celebrated the momentous occasion in an event led by Pallavi Sharma, available on the Selenium Project's YouTube channel. On stage you could see Diego Molina and Puja Jagani from the Selenium Project Technical Leadership Committee, and in the conversation bit in the end a balance of generations of leaders in the project: Jason Huggins, Simon Stewart, David Burns, Alex Rodionov, and myself. There were some generations of information wealth also in the YouTube chat, and it was great seeing people connect for the occasion. 

All of this made me reflect again on the marketing / communication confusion that is Selenium. What is Selenium, really? With at least three major milestones over 20 years in the technical architecture to solve a challenge of *automating browsers*, we keep having conversations along these lines:

  • We should really talk about webdriver, because that is the current implementation and whatever we are doing with webdriver bidi, we will try to maintain the public facing APIs and do changes a layer down in the stack
  • People still can't separate rc, grid and webdriver! And webdriver and selenium manager! 
  • Should I automate with selenium ruby or use Watir? Selenium python or SeleniumBase? 
  • Is it selenium if it uses the protocol and relies on browser implementing the protocol but not the components Selenium project makes available? 
Today, the conversation reframed what Selenium is. Selenium is the friends we made along the way. It is an organization and collective / community coming together to work on browser automation in a way where ownership is decentralized with carefully laid out governance.  


When Selenium is the friends we made along the way and the organizational frame for making friends while solving a problem worth solving in an open source frame, it makes even less sense to compare it as if it was a tool. 

In the box of Selenium for its birthday, we can find: 

  • Generations of technologies that are Selenium. Selenium RC. Selenium WebDriver. Selenium WebDriver BiDi. Selenium Manager. Selenium Grid. 
  • Community solving browser automation AND testing. Individually (because browser automation is not only for testing; testing in browsers is not only with Selenium) and combined. Solving includes projects to drive positive change with standardizing browser automation, showing exemplary cross-industry collaboration and openness, and having the entire community cross-pollinate ideas, sharing them in conversations, posts and conferences. 
  • Ecosystem enablement. Things are powered by Selenium. Open source and commercial. And not only powered, but inspired. While the comparisons of Selenium vs. others seem to miss the mark on comparing apples and apple trees, Selenium powers a lot of the conversations. 
  • Governance, the structures that allow a project to outlive its creators and maintainers. Agreements that remain and keep it open, and agreements that help the project build forward in the fiscal and legal necessities that legal entities have. 
With two years on the project, my contributions are on everything but the technology. And there's plenty of work on all fronts for us to do, so whatever your angle is, you are welcome to join us. Perhaps in a year, we can tell a more recent story of how someone got through a job interview with this community's help than the one we reminisced today from 10 years ago. I would want to see 10 of these stories a year. 

Happy birthday Selenium. You are an international community of brilliant people, the friends we made on the way. And our way forward continues, with more friends to make. 


Friday, October 25, 2024

Dividing future of testing more evenly

I have been a consultant specializing in testing now for five months at CGI. Those months have given me a reason to think about scale and legacy and relevance, the driving forces that led to the negotiation of tailoring me a role amongst great people. I have built myself up to a platform of actionable testing knowledge, and I need to figure out how to do things better. For more people while still having limited time. For building things forward even when I am not around. For seeing a positive change in quality and testing for companies in Finland. 

When thinking about where to go, knowing where you are matters. And looking at where we are, in scale, is an eye opener. Everything I have left behind in learning about how to better do testing, it all still exists. But also, there is a steady stream of new good insights, practices, tools and framings that drive an aspirational better future of testing. A steady stream that requires awareness though, because I have come to realize that, in the words of William Gibson: 

The future is already here, it's just not evenly distributed. 

Sampling in scale is how I believe we find our future. Sampling, and experimenting. Evolving from where we are with ideas of where we could try being. 

Being awarded so many great people's time and attention for conversations, both with my consultant colleagues and many of our customers, I started collecting the themes of change. First listing statements of movement in expected behaviors, then structuring that further in an open space conference session, to finally arranging the created dimensions into a visual of a sort. 


The future that is here and needs distributing more evenly, summed up to four themes. 

Automation

Lifecycle investment. More than regression testing. 

The changes in this theme are: 

  • From transforming manual tests into automated tests to decomposing testing for scales of architectures
  • From testing end to end integrated systems to decomposing feedback mechanisms differently
  • From change control to releases as routine.
Automation is not optional. It's not about what we automate, but what we automate first. Organizations need ownership of software over lifecycle of it, and people will leave for positive reasons of growth and opportunities. We can't afford the change in software without automation. But automation is not end to end testing as testers have learned it. It is reassignment of relevant feedback to design of test systems. Change is here to stay, and the future we need is one where testing looks essentially different. 

Opportunity cost

Lean waste awareness. Shift continuous. 

The changes in this theme are:

  • From significant time to test after collection of changes to significant time to document with programmatic tests that test continuously
  • From doing testing tasks to choosing actions for purpose of quality
A lot of what we consider testing is wasteful. Not doing exploratory testing is wasteful. Overemphasis on specifications is wasteful. Waiting with feedback is wasteful. We need to make better choices of how we use our time. We need agency and collaboration. Writing test cases into a test management system is not the future. We shift left (requirements), right (monitoring) and down (unit testing). And we do smaller increments, making testing continuous so that left and right aren't really concepts anymore. 

Social

Developers test. Learning is essential. Quality engineering. 

The changes in this theme are:

  • From quality assurance through quality assistance to quality engineering
  • From testers as holders of domain knowledge to everyone as holders of domain knowledge
  • From teams of testers to testing by teams
  • From testers to team members with testing emphasis
The work gets distributed so that titles don't tell us who does and knows testing. Interest and learning tell something about that. We collaborate for trust so that we understand what others test, and we own quality together. Test specialists may be called developers, and developers are growing into exceptional testers. Instead of making testing smaller, we pay attention to mechanisms of feedback, where some might not qualify as testing to some folks. 

Next generation artifacts (AI)

Programming and test programming. Insights to recall. 

We can't generate more artifacts that people don't read. We need to revisit opportunity cost, and have executable documentation, and mechanisms of recalling insights even when people change. 

Future 2011

I have dared to speak on the future before, especially when showing up with a friend, Erkki Pöyhönen. We did a talk in 2011 with two predictions for the future. I translated our conclusions from that time for international access. 


The approach we took for the talk was to talk about three things we believed were core to the change we would experience. We also reflected on what would have to foundationally change in what we stand on for things to be different. 

Eki most definitely did see the work on specification by example. Coming from long cycles and organizational splits that I live with now, his platform was on how people will look at ownership and thus distribution of knowledge and skills. 

I was expecting things I still expect. Fewer testers, everyone tests. But I was not ready then to see how much of a role automation would play, still framing it as computer-assisted testing. I said that AI will rock my foundation, and it will. 




Wednesday, October 16, 2024

Not an ad but a review: Copado Robotic Testing

The world of low code / no code tools is an exhausting one. I've looked at a few in the last weeks, and will look at a few more. I don't try to cover them all, but I am trying to do a decent tour to appreciate the options. Currently my list of tools to look at looks like this, in alphabetical order: 

  • Alchemy
  • Curiosity
  • Copado Robotic Testing
  • KaneAI
  • Leapwork
  • Mabl
  • SahiPro
  • Tricentis Tosca
  • UIPath
  • QF-Test
I am well aware that while my list is 10, the real list counts in hundreds. The space is overwhelmed with options. Overwhelmed enough that when asking the real users of these tools about their experiences, the conversation slides to DMs (direct messages). It feels a bit like dipping one's toes in hot water to start conversations in this space. 

Learning out in the open, today about Copado Robotic Testing, I accepted the kind offer of a level-appropriate demo from a conversation started last week. Level-appropriate in the sense that the people demoing were aware of what I know already and could avoid the shiny over the real. Thankful to Esko Hannula and the Copado team for the very personalized experience. 

Before drilling more into what I learned, I must preface this with why I am writing about it. First of all, I am a great fan of great things we do in Finland. Copado Robotic Testing has roots in Finland, even if they, like any successful company, outgrow geographical limits. Second, I am NOT a great fan of low code, and anyone showing me products in this space tends to fight an uphill battle on how they frame their messaging. 

I believe this:

Programming is like writing. Getting started is easy and it takes a lifetime to get good at. 

Visual programming languages tend to become constraints rather than leveling people up. Growing in the space of a general purpose programming language tends to build people up. Growing in the space of a visual programming language tends to constrain problems. Quoting a colleague from a conversation on Mastodon: 

All these things (orms, 4gl, dnd-coding, locode) seem to follow the paradigm of "make simple things simpler, hard things impossible and anything in between really tedious"

Kids graduate from Blockly in a few sessions by having aspirations too big for the box they are given. I am skeptical that adults wouldn't experience the same with low code. Drilling into the expectation that a user should be able to select between "Get text", "Get Web Text", "Get UI Text", "Get Electron Text" and "Find Text" based on whether the app is green screen, web application, desktop application, Electron desktop application or virtual desktop application is kind of stretching it. That is to say that the design of the simplification on the user perspective really matters and it is less than obvious looking into some of these tools. 

Enough of prefacing, let's talk about the tool. Copado Robotic Testing sits on top of Robot Framework. Robot Framework sits on top of Selenium. Selenium sits on top of Python. Anything aiming for low code would hide and abstract away most of this stuff. I mention it because 1) respecting your roots matters 2) stack awareness helps address the risks of moving parts. 

Robot Framework is a test orchestration framework with libraries. Copado's approach starts with a few Robot Framework libraries of their own. A core one is called qWeb, and that core one is available as open source. 

If I take nothing else out of my hour of learning with them, I can take this: the Robot Framework ecosystem has at least three options for libraries driving the web: the Selenium library, the Browser (Playwright) library, and the qWeb library. The third one has a lovely simplicity to it. User language. It reminds me how many versions of custom keywords on top of the Selenium library I've seen over the years being called "work on our internal test framework" in every single project. 

Copado's product has a recording feature they call Live Testing. On one screen, side by side, you'd operate your web application and watch a script being built on top of the qWeb (and other q-words style) libraries. Since this script is a Robot Framework script, I can see a way out of the fear of dependency on possible lifecycle changes of the product, as the script can be written by hand too. It can be used independently of the product. And the part I appreciated particularly: it is saved in version control and diffs nicely, so that changes made while I looked the other way can be made sense of, for those of us with practices relying on that. 


Writing the scripts in Visual Studio Code with Copado's add-on was mentioned while not demoed. Two IDE homes with essentially different user groups is an approach I appreciate. Nice bridge between the way things are and the way things could be. 

We looked at running tests, reports over all the runs, and the concept of running tests in development mode to not keep around results while developing tests. We discussed library support for iFrames and shadow DOM, and noted there are platform-product-specific libraries available in the product, namely qForce for Salesforce testing, capturing some of that product's testing complexities. 

From a fresh memory of a day spent sorting out version dependencies in languages and libraries this week, I could appreciate the encapsulation of the environment you need for running the tests into the product. 

We also looked at the current state of AI in the product, now called Agents. While I feel like assistant-style features of generating tests from text or explaining code are a little less than the word 'agents' would lead me to expect, it serves as a placeholder for growing within that box. Assisting AI well integrated into an IDE like this one is a feature we are learning to expect and benefit from. 

Finally, we looked at a feature called Explorer. The framing of it being even lower code than the text-based Live Testing is something I can link with, yet framing the path we click and the things we note as exploratory testing is something I would be careful with. The idea of hiding text from certain groups of users seems solid though. A nice bridge, even if not a bridge from exploratory testing to automation as I would think of it, but a bridge from business user testing to automation.

While the Copado team opened the session with five things to preface test automation conversations with, I close with the same. Test environments, vendor lock-in, subject matter expert inclusion, results sharing and test maintenance definitely sound like problems to design against in this space. The Copado team's choices on those made sense to me and show they have a solid background in this problem and solution space.  

For me that was an hour well spent, and it led me into a further conversation on the Robot Framework libraries for driving web user interfaces. Lovely people, which I already knew. And a worthwhile tool to consider. 

Monday, October 14, 2024

Complexity draws us to low-code solutions

If there is one conversation I find myself having ever since I became a test consultant this June, it is the one of clarifying the space of test automation tool options. There are options. Lots of options. And it is not the easiest of all things to make sense of the options.

Even if I simplify the options to free options and browser testing, I face the conversation of the three: 

Selenium, Playwright, or Cypress?

You may guess it: the conversation even with these three is less obvious than you'd think. Selenium is driving a standardization effort, which means that while it supports the WebDriver Protocol now, it is already supporting WebDriver BiDi on some of the languages. That is, things are changing under the hood in ways many people in the browser automation space will want to pay attention to. 

Watching things going on within the Playwright repo in pull requests, it looks like WebDriver BiDi is finding its way into Playwright too. And when that happens, it is an essential change to the overall landscape.

For now, with CDP in use, there is a definite need to regularly emphasize that Chromium is neither Chrome nor Edge, and WebKit is not Safari. Playwright and Cypress, running on CDP, don't do real cross-browser testing. The approximation may be sufficient. Single-browser testing may be sufficient. But for now, you would need something based on the WebDriver Protocol (like Selenium) if you wanted to automate for the browsers your users are using. 
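As a minimal illustration of what automating for the browsers your users actually use looks like, a small Selenium sketch running the same check through the real browsers' WebDriver implementations -- assuming Chrome, Firefox and Edge are installed locally; Safari would be webdriver.Safari() on macOS with safaridriver enabled:

# Run the same check against real browsers via their WebDriver implementations.
from selenium import webdriver

def check_title(driver, url="https://www.selenium.dev/"):
    try:
        driver.get(url)
        assert "Selenium" in driver.title
        print(f"{driver.name}: ok ({driver.title})")
    finally:
        driver.quit()

for make_driver in (webdriver.Chrome, webdriver.Firefox, webdriver.Edge):
    check_title(make_driver())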


To make matters more complicated, the three are not the only options. In the Selenium ecosystem for testing purposes, the recommendation is to use one of the frameworks built on top of it, like Nightwatch or SeleniumBase, or frameworks using the WebDriver Protocol without using Selenium, like WebdriverIO. Then again, using Selenium as the driver for any of many commercial tools is also an option, and you might not even know what powers the framework under the hood. There's layers. 


And features. 

However, when we talk about these tools, we don't talk about the layers or the features. We name the tools and leave it to the reader to figure the rest out. 

The main benefit I find in low-code platforms is their closed nature. You don't have to care about what is outside the box, and you have no control over what is inside the box. It probably boxes in the things you need to do testing. It simplifies the world. Instead of reading and making sense of all of this, and all the change in this, you can focus on the work of testing. 

Sometimes we argue about the tools so much that focus gets lost. We live with our choices for a long time, and keeping things open has, in my perspective, more value than the simplification. 

Tuesday, September 24, 2024

Learning programming through osmosis

This article was written in 2016 for Voxxed, which is no longer online. Back then I did not know about POSSE, and thus this piece has not been available online for a while. 

I identify mostly as a non-programmer. Yet, two weeks into a new job, I'm already learning and contributing to Python and C++ code. The method that enables me to do this is ensemble programming: the idea of having a group of people working together on one task at one computer, taking turns on who types for the team while the others instruct. For an idea to get from one person's head to the computer, it flows through someone else's hands. 


This article shares key insights from my journey of a little over a year of learning programming through osmosis: just being around programmers working on code, without the intention of learning. As a result of that learning, I rewrote my history with things I had forgotten and dismissed from my past. I hope it serves as an inspiration for programmers to invite non-programmers to learn to code one layer at a time, immersed in the experience of creating software together, transforming the ability to deliver. Lessons specific to skill sets get transferred both ways, and while I learn from others, they learn from me, leaving everyone better off after the experience. 


Finding Ensemble Programming


Many different roles contribute to building software: product owners, business specialists, and testers. Yet knowledge of programming keeps these roles at a distance. I did not come to programming through wanting to program or taking courses on it, but through working with programmers in a style called ensemble programming. 


As a tester within my team of nine developers, it was clear I was different. I wasn't particularly keen on learning programming, since there was more than enough work in the feedback through empirical evidence and exploration that is the specialty I've developed in depth over two decades. I'm an excellent exploratory tester, and my team's developers have always been my friends with a pickup truck whom I can call in for assistance whenever code needs to be created. Besides being the only non-programmer, I was also the only woman, and part of a team where some people would occasionally spout out things like “Women only write comments in code.” Not exactly an inviting starting position. 


Although I did not like programming, through hobbies that started at the age of twelve and computer science studies that further killed my interest in it, I had acquired experience coding in twelve different languages. I started making small changes in how I looked at programming for my daughter's sake, as I did not want to transfer my dislike of code to a 7-year-old about to enter an elementary learning environment where programming is everywhere, now that programming is a mandatory part of the Finnish curriculum. 


The real change, however, started with Woody Zuill's talk at a conference I organized. Woody is the discoverer of ensemble (mob) programming. The idea of the whole team working on a single task, all together at one computer, just sounded ridiculous. Yet, as ridiculous as it seemed, I thought it could be a way for my team to learn from one another as well as to build the team. Instead of taking someone else's word on methods, I prefer to experience them first hand. And it wasn't like we had to commit for a lifetime, just to try it out once or twice.


The First Experience Expands


With some discussion, my team agreed to try it out, but I knew I would be out of my comfort zone since I would have to be in front of a computer working on code. Our first task was to refactor some of our code with the Extract Method and Rename automatic refactorings, and we had an experienced ensemble facilitator lead the session for us. While not on the keyboard, I found myself able to comment on the names from the domain, and while on the keyboard, I noticed with each round that I was picking up things: keyboard shortcuts, ways to navigate, programming concepts, without anyone really explaining them to me while the work was being done. In the retrospective, I could reflect on my learning and realized that not only was I picking up things I did not know before, everyone else was doing so too. 
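For readers who have not met these automatic refactorings, a small invented illustration (not our actual code) of what Extract Method and Rename do:

```python
# Before: one long function with names that do not speak the domain
def calc(items):
    total = 0
    for item in items:
        total += item.price * (1 + item.vat_rate)
    return total

# After Extract Method and Rename: the domain language becomes visible in the names
def price_with_vat(item):
    return item.price * (1 + item.vat_rate)

def order_total(items):
    return sum(price_with_vat(item) for item in items)
```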


I felt safe in the group, as I did not need to pay full attention to every detail at all times, and I was always supported by the group. Surprisingly, the expected negative remarks on gender did not come out in the group, whereas they would be a regular thing in a more private pairing setting. 


From that first experience, my team extended this into a weekly learning activity. I took the mechanism of learning further for myself, organizing various ensemble programming sessions with the programming community on different programming techniques and languages, learning for example TDD and working with legacy code in a hands-on manner. I introduced my team to ensembling on my work, exploratory testing, and they learned to better identify problems. In our ensemble programming sessions, there were several occasions where my presence in the room fixed an expensive mistake about to happen, caught from half a sentence of discussion. Finding a problem like this early led to more efficient and productive work for everyone. Although it seems inefficient to have so many people working on one thing at the same time, the time saved in avoiding context switching and passing feedback back and forth, the increased focus on completing steps together with great quality, as well as the learning, made us develop much faster and with fewer future problems.  


Joining an All-Female Hackathon


I took the idea of ensemble programming to a weekend hackathon outside work and convinced my teammates to try it out, though only three out of the four decided to be involved. I avoided setting the expectation that I was a non-programmer and just joined in with whatever programming skills I had, without disclaimers. There was even a participant with less coding experience than me, as she had never even looked at code before. 


Out of that weekend, I came out with four major realizations:

  • The best programmer, who stayed outside the ensemble, only contributed graphics. In the ensemble, we were adding one feature at a time and committing regularly, and the senior programmer found it hard not to have modules of her own to work on. There was no long-term plan for incrementally developed software, and the version kept changing under her. We tried summarizing the lessons on the technology we used for her, but she kept hitting problems that blocked her. 

  • I passed as a programmer. No one noticed I was not a programmer. And the reason was that I had become one. I realized that programming is like writing: getting started is easy, and getting good takes a lifetime. 

  • The non-programmer felt like an equal contributor. Her experience was that the code we created was just as much hers as anyone else's, and that is a powerful experience. She learned the basics with us by typing for us and reflecting with us. 

  • We had working software. Not all groups had the same luxury. In the ensemble, we had the discipline to have not just code, but working code, at a scope that could grow depending on how much time we had to add more functionality. 


My Main Lessons


Cognitive dissonance is a powerful tool


The experience of working with an ensemble for over six months transformed how I perceived myself. No amount of convincing and rational argument about how much fun programming is could have done that. When my actions and beliefs are not in sync, my beliefs change. And that is what ensemble programming did to me. It made me a programmer, through osmosis, and got me started on a long journey of always getting better at it. 


Non-programmers have a lot to contribute


I saw that while I was learning a lot, I was also contributing. As a tester, I had information about the intents of the users that seemed mysterious to my programmer colleagues. We would test better while programming, just because I was there. We would avoid mistakes that were about to happen, just because I was there. I could give feedback without egos in play, and we could all learn skills from one another. Even my being slow was a positive thing: it made the other programmers more deliberate and thoughtful in their actions, and they shared the realization that they created better code when going slower. I ended up feeling really proud of how much better my developers learned to test through our shared ensembling time. 


The team got a lot out of it


I wasn’t the only one who learned; everyone in the team picked up different things. It was a pleasure to see how the ability to add unit or Selenium tests expanded from an individual skill to a team skillset, and how many times we found better libraries just because one of us was aware of them. 


We slowly moved from working on technical debt and cleaning up to a shared standard, to having technical assets in the form of libraries that enabled us to do things faster. 


Everyone got their voice into the code better. We worked with the rule that if we had several ideas for how a problem could be approached, we would build both rather than argue at the moment when we had the least practical information about how it would turn out. And it was surprising to notice that something someone would have fought for to the bitter end was good enough to accept once the implementation was available, and not just because people lowered their standards. 


We also learned that when one of us did not feel like contributing in an ensemble format at first, it was a good idea to let them opt out. The party-like nature of the sessions, and the evidence of the rest of us bonding and learning, inevitably drew these non-participants back in on their own initiative later on.   


Ensemble Programming as a Practical Tool for Diversity


Ensemble programming is a great way of introducing new people to programming, or to testing for that matter. It transfers a lot of the tacit knowledge that is otherwise difficult to share. It brings the best of us to the work we do, as opposed to the most of each individual. While working together, we can remove a lot of the rework with fast and timely feedback. We raise our collective competence while allowing individuals to use their specialized skills. We used the rule “learning or contributing” as a guideline for thinking about when an ensemble is doing what it is supposed to. 


As software is such a big part of our society’s present and future, we need all hands on deck in creating it. We need to find ways of bridging roles without telling others that everyone just needs to become a programmer. In the ensemble format, I learned that while I rediscovered a hidden interest in programming, I would have been a valuable contributor even without it. There was a struggle, both for me in doing things I thought I wouldn’t enjoy and for the team in working in a setting they were not used to. It was worth the struggle to remove the distance I previously felt between myself and the programmers. 


Just adding more women and people of color to the field of software development isn’t enough if those people struggle to get their voices included. We need to do more than make the world of coding look diverse. With ensemble programming we can use that diversity to innovate the world of coding overall. (Props for this thought to Kelly Furness, who was in the audience at my DevOxxUK talk.) 


It’s not just learning programming by osmosis, but the learning is mutual. Give it a chance.


About the author


Maaret Pyhäjärvi is a software professional with an emphasis on testing. She identifies as an empirical technologist, a tester and a programmer, a catalyst for improvement, and a speaker. Her day job is working with a software product development team as a hands-on testing specialist. On the side, she teaches exploratory testing and makes a point of adding new, relevant feedback to test-automation-heavy projects through skilled exploratory testing. In addition to being a tester and a teacher, she is a serial volunteer for different non-profits driving forward the state of software development. She blogs regularly at http://visible-quality.blogspot.fi and is the author of the Ensemble Programming Guidebook.