Tuesday, October 29, 2024

Multi-Model Assessment Practice for Describing Skills

I keep making the same observation: we are not great at verbalizing our skills and knowledge. Not just about testers and testing, but particularly about testers and testing. Instead, we tend to behave as if the word testing carried more shared meaning about the work and its results than it actually does. 

Just a couple of examples. For a given tester, testing could mean that they: 
  • Reproduce customer-reported issues from customer acceptance testing, improving insufficient reports and running change advisory boards to prioritize the customer issues. 
  • Spend time conducting a technical investigation of the system, its features and its changes, reporting discrepancies they are curious about. 
  • Maneuver builds between environments so that customer acceptance testing can do what they feel like doing on the version targeted for production. 
  • Reboot the test environment on Wednesdays, since by then the low-end test environment has seen enough of the week to become less performant.
Well, you could continue the list. You could describe the things you routinely do. You could try avoiding saying you plan testing, design test cases, execute test cases and report results. You could try other words on for size, and you could describe things you routinely do, have done in the past and could do again, and even things you are working your way into. Elizabeth Zagroba does this exceptionally well with her trustworthy resume.

As a consultant, my CV (as well as my consultant colleagues' CVs) is often looked at when selecting us for delivery projects. Needing to create the concept of an Exceptional Consulting CV, I am in the process of experimenting with what works for me. Since the latest version of my CV is labeled confidential, I won't go about sharing it. Instead, I share some of the thinking that went into it. And, well, into annual performance reviews. Those are the two reasons to wrap your skills and competences in a nice sales-oriented package. 

Personal Skills

My first infographic is inspired by a recurring frustration: consulting CVs are often screened for skills at the level of "has painted with blue color", without considering how experience transfers between technologies, or how combinations make the application of a particular color particularly insight-driven. I have written Python code. I have also written Java code. I choose not to list language-specific application frameworks even though I have used some of them, because I am not making a case for hiring me as a developer. I want to focus on developing for the purposes of test problems.


At this point of my career and my interests, I am very confident in my belief that I can learn technologies and application domains so that my feedback after six months includes remarks of surprise at the things I know. I have learned to learn. That's a rainbow skill though - so meta, and particularly hard to explain.

Roles

Past experiences mean little if the future I aspire to isn't about repeating them. The future I aspire to is one of testing specialty. But alas, testing is not a single specialty. At minimum, it's three. And while I may be assigned the default role of Test Lead, I always prefer the role of Test Analyst and will intertwine that with whatever ideas people have about Test Automator. I call this mix Contemporary Exploratory Testing, and my insistence on this mix leads me to another infographic: the idea that the order in which you grow the main areas of skill depends on the role. 


There are beginner test leads, beginner test analysts and beginner test automators. Acquiring all three skill sets is an effort distributed over decades, and juggling them tends to create a fair share of thrashing for focus. 

Hats

Since "roles" is taken by the concept above, I have been playing with observing people through hats. These are my descriptions of the more detailed (yet not detailed at all) listing of the components testers (and developers testing) build their test role from. I created the first version of this four years ago, observing my team working with embedded software testing. In that particular area, with its test lab maintenance, the hat of 'test technician' is easily a full-time job. Attending to the artificial sky, coordinating who gets to stand inside the radar that fits 5 people, simulating rain by taking your test environment to the shower - all experiences that I have now witnessed and scheduled. 

Since the first version, which I presented as a keynote at Appium Lite Conference 2021, I have had my later team of mostly developers assess their focus (and make agreements on the gaps), and today I updated the model. A version at a resolution where you can read the purple-box fine print is also available.  


The focused / occasional / aspiring classification over the 3x17 (roles x hats) grid is a way of indicating coverage. Very tester of me to want a coverage measure on a model. 
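To make the coverage idea concrete, here is a minimal Python sketch of one way such a grid of markings could be turned into a number. The role and hat names and the markings are made-up examples for illustration, not the actual content of the 3x17 model.

```python
# Illustrative sketch of a coverage measure over a roles x hats grid.
# The hats and markings below are hypothetical, not the real model's listing.

ROLES = ["test lead", "test analyst", "test automator"]
HATS = ["explorer", "toolsmith", "facilitator", "test technician"]

# Sparse grid: (role, hat) -> focused / occasional / aspiring.
# Cells left unmarked count as uncovered.
grid = {
    ("test analyst", "explorer"): "focused",
    ("test automator", "toolsmith"): "focused",
    ("test lead", "facilitator"): "occasional",
    ("test analyst", "test technician"): "aspiring",
}

def coverage(grid, roles, hats):
    """Share of role x hat cells carrying any marking at all."""
    marked = sum(1 for (role, hat) in grid if role in roles and hat in hats)
    return marked / (len(roles) * len(hats))

print(f"{coverage(grid, ROLES, HATS):.0%} of cells marked")  # 4 of 12 cells
```

The measure could of course weight focused above aspiring; the point is only that marking the grid makes the gaps visible and countable.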

Levels and Proficiencies

Generally I am not a great fan of assessing skills on a scale. Skills atrophy as soon as you stop using them. As soon as you are applying a skill and paying attention to thoughtful application, you are improving. Knowing where you are takes a lot of conversations, and there is no better way of learning your place on the journey, and making steps ahead on it, than ensemble testing. But say you want, or are asked, to describe levels and proficiencies anyway. Well, I am.  


I have recently discovered a love for the Hartman proficiency taxonomy. At the level of hats, I get a richer vocabulary to explain where I am and where I am growing. At the level of roles, granularity suffers. And at the level of task analysis, I can really use it to drive home growth. 

Industry-wide benchmarking

For industry-wide benchmarking, where I am we are using WTW. For obvious reasons I am mostly curious about how I am leveled, and whether that is right, since some folks use this to argue that the Women's Euro is about the same as the Men's. I feel like I should try voting for placing me on the Professional / Expert scale where I reside. 


It's a career progression model. Professional has a QA track within it. And I am somewhere half in Professional, half in Management, not fitting anyone's model as such. 

Engineering Ladders

The final piece of this multi-model assessment practice walkthrough is engineering ladders. Doing justice to testers in it is always a bit of an exercise, hence the other models, but this one grounds us all in a shared frame. And I appreciate that. I don't believe there is a magical tester mindset. Developers can test; the split we have is less about mindset and more about allowing time for perspectives. 

Test leads tend to grow on the people, process and influence axes. Test analysts tend to grow on the system and process axes. Test automators tend to grow on the technology and system axes. I have worked with people who can work, skills-wise, on the full range of five dimensions, but they can't do it time-wise. The full range is a team of people. 

Multi-model Assessment Practice?

A fancy name, right? It basically says: take as many models as you feel you benefit from. This was my selection for describing skills, today. I have another selection for test process improvement assessments, which I also do multi-model: the Quality Culture Transition Assessment (modern testing), the TPI model (testing), and the Testomat Assessment model (test automation), usually spiced with the Context STAR model and whatever complementary models I feel I could use, based on my experience as an assessor, for the observations I need to illustrate. I illustrate a lot of things. Like the things in this blog. 

Create your own selection and use bits of what I have if that is useful. I care less for a universal model, and more for a model that helps me make sense of complexities I am dealing with. 




Monday, October 28, 2024

Selenium is the friends we made along the way!

Today, October 28th, is a relevant day. While it is National Chocolate Day for our American colleagues, it is also the official date when we commemorate Selenium. 

On October 28th, 2024, Selenium turned 20 years old, and we celebrated the momentous occasion in an event led by Pallavi Sharma, available on the Selenium Project's YouTube channel. On stage you could see Diego Molina and Puja Jagani from the Selenium Project Technical Leadership Committee, and in the conversation bit at the end a balance of generations of leaders in the project: Jason Huggins, Simon Stewart, David Burns, Alex Rodionov, and myself. There were generations of information wealth in the YouTube chat too, and it was great seeing people connect for the occasion. 

All of this made me reflect again on the marketing / communication confusion that is Selenium. What is Selenium, really? With at least three major milestones over 20 years in the technical architecture to solve the challenge of *automating browsers*, we keep having conversations along these lines:

  • We should really talk about WebDriver, because that is the current implementation, and whatever we are doing with WebDriver BiDi, we will try to maintain the public-facing APIs and make changes a layer down in the stack.
  • People still can't separate RC, Grid and WebDriver! And WebDriver and Selenium Manager! 
  • Should I automate with Selenium Ruby or use Watir? Selenium Python or SeleniumBase? 
  • Is it Selenium if it uses the protocol and relies on the browser implementing the protocol, but not the components the Selenium project makes available? 
Today, the conversation reframed what Selenium is. Selenium is the friends we made along the way. It is an organization and a collective / community coming together to work on browser automation in a way where ownership is decentralized, with carefully laid out governance.  


When Selenium is the friends we made along the way, and the organizational frame for making friends while solving a problem worth solving in an open source frame, it makes even less sense to compare it as if it were a tool. 

In the box of Selenium for its birthday, we can find: 

  • Generations of technologies that are Selenium: Selenium RC, Selenium WebDriver, Selenium WebDriver BiDi, Selenium Manager, Selenium Grid. 
  • A community solving browser automation AND testing. Individually (because browser automation is not only for testing, and testing in browsers is not only with Selenium) and combined. Solving includes projects that drive positive change by standardizing browser automation, showing exemplary cross-industry collaboration and openness, and having the entire community cross-pollinate ideas, sharing them in conversations, posts and conferences. 
  • Ecosystem enablement. Things are powered by Selenium, open source and commercial. And not only powered, but inspired. While comparisons of Selenium vs. others tend to miss the mark by comparing apples and apple trees, Selenium powers a lot of the conversations. 
  • Governance, the structures that allow a project to outlive its creators and maintainers. Agreements that remain and keep it open, and agreements that help the project build forward within the fiscal and legal necessities that legal entities have. 
With two years on the project, my contributions are in everything but the technology. And there's plenty of work on all fronts for us to do, so whatever your angle is, you are welcome to join us. Perhaps in a year, we can tell a more recent story of how someone got through a job interview with this community's help than the one we reminisced about today, from 10 years ago. I would want to see 10 of these stories a year. 

Happy birthday, Selenium. You are an international community of brilliant people, the friends we made along the way. And our way forward continues, with more friends to make. 


Friday, October 25, 2024

Dividing future of testing more evenly

I have now been a consultant specializing in testing for five months at CGI. Those months have given me reason to think about scale, legacy and relevance, the driving forces that led to the negotiation of tailoring me a role amongst great people. I have built myself a platform of actionable testing knowledge, and I need to figure out how to do things better. For more people, while still having limited time. For building things forward even when I am not around. For seeing a positive change in quality and testing for companies in Finland. 

When thinking about where to go, knowing where you are matters. And looking at where we are, at scale, is an eye-opener. Everything I have left behind in learning how to do testing better still exists. But there is also a steady stream of new good insights, practices, tools and framings that drive an aspirational better future of testing. A steady stream of awareness, though, because I have come to realize, in the words of William Gibson: 

The future is already here, it's just not evenly divided. 

Sampling in scale is how I believe we find our future. Sampling, and experimenting. Evolving from where we are with ideas of where we could try being. 

Being awarded so many great people's time and attention for conversations, both with my consultant colleagues and many of our customers, I started collecting the themes of change: first listing statements of movement in expected behaviors, then structuring them further in an open space conference session, and finally arranging the resulting dimensions into a visual of sorts. 


The future that is here and needs distributing more evenly, summed up in four themes. 

Automation

Lifecycle investment. More than regression testing. 

The changes in this theme are: 

  • From transforming manual tests into automated tests to decomposing testing for scales of architectures
  • From testing end to end integrated systems to decomposing feedback mechanisms differently
  • From change control to releases as routine.
Automation is not optional. It's not about what we automate, but what we automate first. Organizations need ownership of software over its lifecycle, and people will leave for positive reasons of growth and opportunity. We can't afford the pace of change in software without automation. But automation is not end-to-end testing as testers have learned it. It is the reassignment of relevant feedback to the design of test systems. Change is here to stay, and the future we need is one where testing looks essentially different. 

Opportunity cost

Lean waste awareness. Shift continuous. 

The changes in this theme are:

  • From significant time to test after collection of changes to significant time to document with programmatic tests that test continuously
  • From doing testing tasks to choosing actions for purpose of quality
A lot of what we consider testing is wasteful. Not doing exploratory testing is wasteful. Overemphasis on specifications is wasteful. Waiting to give feedback is wasteful. We need to make better choices about how we use our time. We need agency and collaboration. Writing test cases into a test management system is not the future. We shift left (requirements), right (monitoring) and down (unit testing). And we do smaller increments, making testing continuous so that left and right aren't really concepts anymore. 
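A small hedged sketch of what "documenting with programmatic tests that test continuously" can look like: a check, in Python here, that both documents an intended behavior and runs on every change. The discount rule and all names are invented for illustration.

```python
# Illustrative only: a programmatic test that documents a hypothetical
# business rule and can run continuously, instead of a prose test case
# parked in a test management system.

def discounted_total(order_total: float) -> float:
    """Hypothetical rule: orders of 100 or more get 10% off."""
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

def test_discount_rule_reads_as_documentation():
    assert discounted_total(99.99) == 99.99    # below threshold: no discount
    assert discounted_total(100.00) == 90.00   # at threshold: 10% off
    assert discounted_total(250.00) == 225.00  # above threshold: 10% off

test_discount_rule_reads_as_documentation()
```

A reader learns the rule from the test itself, and the check keeps earning its documentation value on every change instead of waiting for a test phase.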

Social

Developers test. Learning is essential. Quality engineering. 

The changes in this theme are:

  • From quality assurance through quality assistance to quality engineering
  • From testers as holders of domain knowledge to everyone as holders of domain knowledge
  • From teams of testers to testing by teams
  • From testers to team members with testing emphasis
The work gets distributed so that titles don't tell us who does and knows testing. Interest and learning tell something about that. We collaborate for trust, so that we understand what others test, and we own quality together. Test specialists may be called developers, and developers are growing into exceptional testers. Instead of making testing smaller, we pay attention to mechanisms of feedback, even where some might not qualify as testing to some folks. 

Next generation artifacts (AI)

Programming and test programming. Insights to recall. 

We can't generate more artifacts that people don't read. We need to revisit opportunity cost, have executable documentation, and build mechanisms for recalling insights even when people change. 

Future 2011

I have dared to speak about the future before, especially when showing up with a friend, Erkki Pöyhönen. In 2011 we gave a talk with two predictions for the future. I have translated our conclusions from that time for international access. 


The approach we took for the talk was to discuss three things we believed were core to the change we would experience. We also reflected on what would have to change foundationally, in what we stand on, for things to be different. 

Eki most definitely saw the work on specification by example coming. Coming from the long cycles and organizational splits that I live with now, his platform was how people would look at ownership, and thus the distribution of knowledge and skills. 

I was expecting things I still expect. Fewer testers, everyone tests. But I was not ready then to see how much of a role automation would play, still framing it as computer-assisted testing. I said that AI would rock my foundation, and it will. 




Wednesday, October 16, 2024

Not an ad but a review: Copado Robotic Testing

The world of low code / no code tools is an exhausting one. I've looked at a few in recent weeks, and will look at a few more. I don't try to cover them all, but I am trying to do a decent tour to appreciate the options. Currently my list looks like this, in alphabetical order: 

  • Alchemy
  • Copado Robotic Testing
  • Curiosity
  • KaneAI
  • Leapwork
  • Mabl
  • QF-Test
  • SahiPro
  • Tricentis Tosca
  • UIPath
I am well aware that while my list counts 10, the real list counts in the hundreds. The space is overwhelmed with options. Overwhelming enough that when you ask the real users of these tools about their experiences, the conversation slides into DMs (direct messages). Starting conversations in this space feels a bit like dipping one's toes in hot water. 

Learning out in the open, today about Copado Robotic Testing, I accepted the kind offer of a level-appropriate demo that came out of a conversation started last week. Level-appropriate in the sense that the people demoing were aware of what I already know and could avoid the shiny over the real. Thanks to Esko Hannula and the Copado team for the very personalized experience. 

Before drilling into what I learned, I must preface this with why I am writing about it at all. First of all, I am a great fan of the great things we do in Finland. Copado Robotic Testing has roots in Finland, even if they, like any successful company, have outgrown geographical limits. Second, I am NOT a great fan of low code, and anyone showing me products in this space tends to fight an uphill battle on how they frame their messaging. 

I believe this:

Programming is like writing. Getting started is easy and it takes a lifetime to get good at. 

Visual programming languages tend to become constraints rather than leveling people up. Growing in the space of a general-purpose programming language tends to build people up. Growing in the space of a visual programming language tends to constrain the problems. Quoting a colleague from a conversation on Mastodon: 

All these things (orms, 4gl, dnd-coding, locode) seem to follow the paradigm of "make simple things simpler, hard things impossible and anything in between really tedious"

Kids graduate from Blockly in a few sessions by having aspirations too big for the box they are given. I am skeptical that adults wouldn't experience the same with low code. Expecting a user to select between "Get text", "Get Web Text", "Get UI Text", "Get Electron Text" and "Find Text" based on whether the app is a green screen, web application, desktop application, Electron desktop application or virtual desktop application is kind of stretching it. That is to say: the design of the simplification from the user's perspective really matters, and it is less than obvious looking into some of these tools. 

Enough prefacing; let's talk about the tool. Copado Robotic Testing sits on top of Robot Framework. Robot Framework sits on top of Selenium. Selenium sits on top of Python. Anything aiming for low code would hide and abstract away most of this stack. I mention it because 1) respecting your roots matters and 2) stack awareness helps address the risks of moving parts. 

Robot Framework is a test orchestration framework with libraries. Copado's approach starts with a few Robot Framework libraries of their own. A core one is called qWeb, and that core one is available as open source. 

If I take nothing else out of my hour of learning with them, I can take this: the Robot Framework ecosystem has at least three options for libraries driving the web: the Selenium library, the Browser (Playwright) library, and the qWeb library. The third one has a lovely simplicity to it. User language. It reminds me of how many versions of custom keywords on top of the Selenium library I've seen over the years, called "work on our internal test framework" in every single project.
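To illustrate why a text-first "user language" library appeals to me, here is a rough Python sketch of the idea: keywords that locate elements by the text a user sees, not by technical locators. This is emphatically not qWeb's implementation; FakePage is a stand-in I invented so the sketch runs without a browser.

```python
# Sketch of "user language" keywords in the spirit of ClickText / TypeText /
# VerifyText. Not qWeb's code; FakePage fakes a browser page for illustration.

class FakePage:
    """Hypothetical stand-in for a browser page, keyed by visible text."""
    def __init__(self, visible_texts):
        self.visible = set(visible_texts)
        self.clicked = []
        self.typed = {}

    def find_by_text(self, text):
        if text not in self.visible:
            raise LookupError(f"no element with visible text {text!r}")
        return text

def click_text(page, text):
    """Click whatever element shows this text."""
    page.clicked.append(page.find_by_text(text))

def type_text(page, label, value):
    """Type into the field labeled with this text."""
    page.typed[page.find_by_text(label)] = value

def verify_text(page, text):
    """Fail unless the text is visible on the page."""
    page.find_by_text(text)

# The calling test reads close to how a user would narrate the flow:
page = FakePage({"Username", "Password", "Log in", "Welcome"})
type_text(page, "Username", "alice")
click_text(page, "Log in")
verify_text(page, "Welcome")
```

The point is not the fake, but the shape of the vocabulary: text the user can see, instead of CSS selectors or XPath.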

Copado's product has a recording feature they call Live Testing. On one screen, side by side, you operate your web application and watch a script being built on top of the qWeb (and other qWords-style) libraries. Since this script is a Robot Framework script, I can see a way out of the fear of depending on the product's possible lifecycle changes, as the script can also be written by hand and used independently of the product. And the part I particularly appreciated: it can be saved in version control and diffs nicely, so that changes made while I looked the other way can be made sense of by those of us with practices relying on that. 


Writing the scripts in Visual Studio Code with Copado's add-on was mentioned, though not demoed. Two IDE homes for essentially different user groups is an approach I appreciate. A nice bridge between the way things are and the way things could be. 

We looked at running tests, at reports over all the runs, and at the concept of running tests in development mode so as not to keep results around while developing tests. We discussed library support for iframes and shadow DOM, and noted there are platform-specific libraries available in the product, namely qForce for Salesforce testing, capturing some of that product's testing complexities. 

With fresh memories of a day spent this week sorting out version dependencies in languages and libraries, I could appreciate that the product encapsulates the environment you need for running the tests. 

We also looked at the current state of AI in the product, now called Agents. While I feel the assistant-style features of generating tests from text or explaining code are a little less than the word "agents" would lead me to expect, the name serves as a placeholder for growing within that box. Assisting AI well integrated into an IDE like this one is a feature we are learning to expect, and benefit from. 

Finally, we looked at a feature called Explorer. The framing of it as even lower code than the text-based Live Testing is something I can connect with, yet framing the paths we click and the things we note as exploratory testing is something I would be careful with. The idea of hiding text from certain groups of users seems solid, though. A nice bridge, even if not a bridge from exploratory testing to automation as I would think of it, but a bridge from business-user testing to automation.

While the Copado team opened the session with five things to preface test automation conversations with, I close with the same: test environments, vendor lock-in, subject matter expert inclusion, results sharing and test maintenance definitely sound like problems to design against in this space. The Copado team's choices on those made sense to me and show they have a solid background in this problem and solution space.  

For me, that was an hour well spent, and it led me into a further conversation on the Robot Framework libraries for driving web user interfaces. Lovely people, which I already knew. And a worthwhile tool to consider. 

Monday, October 14, 2024

Complexity draws us to low-code solutions

If there is one conversation I find myself having ever since I became a test consultant this June, it is the one clarifying the space of test automation tool options. There are options. Lots of options. And it is not the easiest of all things to make sense of them.

Even if I simplify the options down to free options for browser testing, I face the conversation of the three: 

Selenium, Playwright, or Cypress?

You may guess it: the conversation even with these three is less obvious than you'd think. Selenium is driving a standardization effort, which means that while it supports the WebDriver protocol now, it already supports WebDriver BiDi in some of the languages. That is, things are changing under the hood in ways many people in the browser automation space will want to pay attention to. 

Watching what is going on in pull requests in the Playwright repo, it looks like WebDriver BiDi is finding its way into Playwright too. And when that happens, it is an essential change in the overall landscape.

For now, with CDP in use, there is a definite need to regularly emphasize that Chromium is not Chrome nor Edge, and WebKit is not Safari. Playwright and Cypress, running on CDP, don't do real cross-browser testing. The approximation may be sufficient. Single-browser testing may be sufficient. But for now, you need something based on the WebDriver protocol (like Selenium) if you want to automate for the browsers your users are actually using. 


To make matters more complicated, the three are not the only options. In the Selenium ecosystem, for testing purposes the recommendation is to use one of the frameworks built on top of it, like Nightwatch or SeleniumBase, or a framework using the WebDriver protocol without using Selenium, like WebdriverIO. Then again, using Selenium as the driver inside any of many commercial tools is also an option, and you might not even know what powers the framework under the hood. There's layers. 


And features. 

However, when we talk about these tools, we don't talk about the layers or the features. We just name the tools, leaving it for the reader to figure all this out. 

The main benefit I find in low-code platforms is their closed nature. You don't have to care what is outside the box, and you have no control over what is inside the box. It probably boxes in the things you need to do testing. It simplifies the world. Instead of reading and making sense of all this, and all the change to this, you can focus on the work of testing. 

Sometimes we argue about the tools so much that focus gets lost. We live with our choices for a long time, and keeping things open has, in my perspective, more value than the simplification.