Friday, June 21, 2024

Memory Lane 2005: ISTQB Foundation Syllabus, Principles of Testing

Some people create great visuals. One I appreciated today turned the 7 principles of testing into a path around the number 7, by Islem Srih. As my eyes tracked through the image, a sense of familiarity hit me. This is the list of 7 I curated for the ISTQB Foundation Syllabus 1st edition in 2005. The one I still joke about holding copyright to, since I never signed off my rights. ISTQB did ask, I did not respond. 

Also, it is a list that points to other people's work. Back then, as a researcher, I was collating, not reimagining. The things I consider worth my time have changed since, and I would not contribute to ISTQB syllabi anymore.

Taking the trip down memory lane, I had to look at what I started with to build the 7 principles. I recognize editions since have changed labels - the thing I called *pesticide paradox* to honor Boris Beizer's work is now "tests wear out". I'm pretty sure I would have credited Dijkstra on the absence-not-presence principle, as he is the originator of that idea, and I find it impossible to believe I would have penciled in [Kaner 2011] for something that is very obviously [Kaner 2001]. It is incorrect in the latest published syllabus. Back then I knew who said what, and had nothing to say myself as I imagined it was all clear already. Little did I know... 

The 7 principles that made the cut were ones we did not argue much about. 



I have the 2005 originals, which I was no longer able to find online easily. I can confirm I did not mess up the references, because I did not put in the references. Now that I look at this, the agreement was not to put in references unless they were recommended reading to complete the syllabus.
I also went back to look at the principles I started working from to get to these 7. The older set of principles is from the ISEB Foundation. 

Comparing the two

ISTQB:

  • Presence not absence
  • Exhaustive is impossible
  • Early testing
  • Defects cluster
  • Pesticide paradox
  • Context-driven
  • Absence of errors fallacy
ISEB:

  • Exhaustive testing is usually impractical
  • Testing is risk-based and risk must be used to allocate time
  • Removal of faults potentially improves reliability
  • Testing is measurement of quality
  • Requirements may determine the testing performed
  • Difficult to know how much is enough
It was clear we did not agree with the latter. While these may be principles that are much better than what X normally sees, as per one of the comments on the post that inspired me to take this memory lane trip, I am no longer convinced they are truly the principles worth sharing. But they do make the rounds. I am also not convinced the later additions and explanations improved them, at least in their generalized nature. 

In hindsight, I can see what I did there. I grounded the syllabus a bit more in research. Some of which I don't think we've properly done to this day (early testing, for example - the jury is still out; clustering of defects feels more like folklore than a principle; the pesticide paradox is more an encouragement to explore than tests actually becoming ineffective).

Perhaps revisiting what is truly true would now be good. Now that I no longer think my greatest contribution to testing is to know what everyone else says and not say anything myself. 

Two Years with Selenium Project Leadership Committee

Today I am learning that I am not great - even uncomfortable - at using the voice of an organization. And for two years, I have held a small part of the voice of an organization, being a member of the Selenium Project Leadership Committee. I have been holding space for and facilitating volunteer organizations for over three decades, and just from my choice of words you note that I don't say I have been leading or running or managing, even though those would probably be fair words for other people to assign watching the work I do.

Volunteer organizations are special. It is far from obvious that they stay around and active for two decades, like the Selenium project has. It is far from obvious they survive changes of leadership, but for an organization to outlive people's attention span, that is necessary. It is also far from obvious that they manage to navigate the complex landscape of individual and corporate users, their hopes and expectations, and other collaborators solving similar problems, and create just enough but not too much governance to have that voice of an organization. 

If I were comfortable using the voice of an organization, I would not be nervous speaking at Selenium Conference 2024 today with Diego Molina and Puja Jagani on the 'State of the Union' keynote. Similarly, I might find it in me to write a post on the official Selenium blog, since I even have admin access to Selenium repos on GitHub. But it's just so much easier to avoid all the collaboration and write in my own voice, avoiding the voice of an organization that I always hold in the back of my mind - or my heart. 

At Selenium Conference India 2018 I was invited to keynote, and I chose to speak about the Intersection of Automation and Exploratory Testing. Back then I did not know I would end up combining the paths under the flag of Contemporary Exploratory Testing, and I most definitely did not know that the organizers inviting me to keynote would bring me to an intersection where my path aligned with the Selenium project. 

In 2018 I keynoted. A little later, I volunteered with the program committee reviewing proposals, again invited by the people already in. And in 2022, I ended up on the Selenium Project Leadership Committee (PLC), again on an invite. 20 years of Selenium now in 2024 marks 2 years in the PLC for me. 

Today I am wondering what I have to show for it. And like I always do when I start that line of thinking, I write it down. With my voice. 


Making sense of what the PLC (project leadership committee) is in relation to the TLC (technical leadership committee) wasn't an easy one to figure out. There's the code, packaging that into releases, and the related documentation and messaging, and all of that is led by the TLC. If there is no software, there is no software project. Software that does not change is dead, and in two years I have reinforced the understanding that Selenium is not dead, or dying. It's alive, well, and taking next steps in the fairly complex world of cross-browser standards-based agreements, to then implement something that everyone, not just the project's immediate WebDriver implementations, uses. The Selenium project is a pioneer and collaborator in building up the modern web. 

The first thing I have to show for two years of Selenium PLC is understanding what Selenium is. And that is many things. It is the multiple components of this entire ecosystem. It's working on standards to unite browsers. It's a community of people who find this interesting enough to do as a hobby, volunteering, unpaid. It's where we grow and learn together. It's taking pride in inspiring other solutions in the browser automation space, whether they are built on top of Selenium or make other choices, letting real browsers continue on their path of differentiation while still allowing automation of real browsers with a unified WebDriver library. 

What have I been doing in the project then, other than making sense of it? 

  • Realizing there are a lot of emails people want to send to "Selenium". There are a lot of requests for privacy and security assessments that organizations using Selenium think they can just ask "the company" for - where there is no company. There are a lot of proposals for collaboration, especially in marketing. And there are donations / sponsoring - because even a volunteer project needs money to pay for its tools and services. 
  • Fighting against becoming the "project assistant". For people to get together, someone needs to schedule it. Calendars as self-service aren't a thing everywhere in the busy tech world. After fighting enough personal demons, I did end up putting up a calendar invite and taking up the habit of posting on Slack after every conversation to share what the conversation was about. Seems that bit is something I have enough routine to run with. 
  • Picking and choosing what/how I want to contribute: I wanted to move from face-to-face conferences and online broadcasts to online conversations, and set up the Selenium Open Space Conference for it. I loved the versatility of sessions, and particularly the one person showing up with his own open source project that he would walk different people through in various sessions. I also loved learning about how Pallavi Sharma ended up publishing three books on Selenium, and what goes on in the background of authoring books. 
  • Picking and choosing what/how I want to contribute: setting up a micro-sponsorship model and starting steps towards full financial transparency. I will need another two years to complete the full financial transparency, but that is something I believe in heavily. I want it to be normal to say that the Selenium project holds $500k at Software Freedom Conservancy. You can dig out that info from their annual reports. But I also want to say that in addition to what is held there, it would all be visible on Open Collective. We have raised $1,684.49 since we started off with the Selenium community money, and there are many steps to take to move the host of that money to be Selenium/BrowserAutomation Inc, with every transaction transparently visible. 
  • Fiscal hosting ended up being something I specialize in. We have five people in the PLC, two of whom do double duty with the TLC, and every one of us holds a paying job with relevant levels of responsibility at different companies. Within the two years we also had two more people in the PLC, Bill McGee and Corina Pip, brilliant folks for whom we now hold space to become free from other engagements and rejoin our efforts, once some fiscal hosting / governance stuff is better sorted. Knowing where you are is the start of sorting it out. 
  • Setting up Selenium on Mastodon. We could really use someone taking care of Selenium in social media overall, but I did create a placeholder for content, even if my content production with the voice of an organization is sporadic at best. 
  • Introducing a new class of sponsors, where companies sponsor by allocating people to contribute significant work time to the project and being recognized for it. Not that I did this alone, but working in a user organization rather than a vendor organization, I just happen to be able to hold space for something like this by positioning. 
  • Dealing with people. Some of that is overwhelming, but at the same time necessary. People have ideas, and sometimes the ideas need aligning. Mostly I listen, but I would also step in to mediate. And I have been known to do so. 
If there is a future I have an impact on for Selenium, it will be: 
  1. Diverse Community Centric and focused on Collaboration. 
  2. Fiscally transparent and easy to access for taking stuff forward. 
  3. Worthwhile for users and contributors - individual and corporate. 
Some days - most days - I feel completely insufficient with the amount of time I can volunteer. But it adds up. And it will add up more. The scale at which it matters never ceases to amaze me. 2.5M unique monthly users. Real polyglot support with bindings for Python, Java, .NET, JavaScript, Ruby, PHP... 16M Python downloads. 98.5M Java downloads. 

It takes a village, and I am showing up with you all, inviting you along. 

Thursday, June 20, 2024

Good Enough Quality Is Taking a Dip

A colleague in the community was reporting on her experience teaching tech to a group of kids, expressing how tech makes it hard to love tech sometimes, and her post pushed me to think about it. This is an experience a lot of us have. 

We want to show a newbie group how to run with a lovely new tool, only to realize that to run the new tool, we first have to get around whatever troubles come our way. Some of the troubles come from the differences in our environments. Some from the fact that no one updates and maintains some computers. Some from the choices of features (ads in particular) that get in the way. It's not enough if the thing we're building works; the operating system underneath and even the antivirus running in the background are equally part of the users' experience. 

Those of us teaching in tech have found workarounds for environmental differences, my favorite being to work either on my computer through ensemble programming or, if I want to teach newbies for real, on their computers, making time to solve the real-life problems they will face because things are different. 

Quality, while good enough, is currently not great. And it has not been only getting better in recent years, even if some people speak of AI as being today the worst it will ever be. 

Looking at AI-enabled products, very few of them are really making things better. 

I was purchasing a lot of books from Amazon during my vacation time, and seeing the AI-generated summary posts on top of reviews wasn't helping me with my selections. I went for #booktok, as AI renders the first recommendation pretty much useless to me - I don't want a summary, I want to find someone like me. 

Trying out the new AI-powered search from Google wasn't producing better results, and I am pretty sure they too know that the AI-powered one is doing worse than the classic one. 

Apple just announced introducing - finally - some AI-powered features with ChatGPT integration, looking like a placeholder to grow worthwhile features in, but other than that giving the appearance of adding little usefulness to the product. 

Everyone seeks uses of AI. Experiments with AI. I do too. I would still report that in the last two years of actively using AI tools, they have not added to my productivity, but they have sometimes added to my quality (I'm really good at not being happy with AI's results) and they have added to my fun and enjoyment. 

Current uses of AI are driving down the experience we have of good enough quality. To include a placeholder for AI to grow into - assuming it is only getting better - we make quite significant compromises in the value our products and services provide. The goal of AI placeholders is not to make things better today. It is to make space to make things better soon, and meanwhile we may take a dip in the experience of relevance for users. 

For years, I have come to note that truly seeking good quality is not common. Our words to explain the level we are targeting or achieving are hidden behind the fuzzy term "quality". Even bad quality is quality. Making things fun and entertaining is quality. 

Our real target may be usefulness and productivity. Being able to do things with tech we weren't able to do before, and being able to do things we really need to do. While we find our way there with GenAI, we're going to experience a dip in good enough quality to allow for large-scale experimentation on us.  

 

Thursday, June 6, 2024

GenAI Pair Testing

This week, I got to revisit my talk on Social Software Testing Approaches. The storyline is pretty much this: 

  • I was feeling lonely as the only tester amongst 20 developers at a job I had 10 years ago. 
  • I had a team of developers who tested but could only produce results expected from testing if they looked at my face sitting next to them, even if I said nothing. I learned about holding space. 
  • I wanted to learn to work better on the same tasks. Ensemble programming became my gateway experience to extensive pair testing beyond people I vibe with, and learning to do strong style pairing was transformative. 
  • People are intimidating, you feel vulnerable, but all of the learning that comes out is worth so much.
As I was going through the preparations for the talk, I realized something has essentially changed since I delivered that talk last time. We've been given an excuse to talk to a faceless, anonymous robot in the form of generative AI chatbots. The thing I had described as essential to success in strong-style pairing (expressing intent!) was now the key differentiator in who got more out of the tooling that was widely available. 

I created a single slide to start the conversation on something that we had been doing: pairing with the application, sometimes finding it hard not to humanize a thing that is a stochastic parrot, even if a continuously improving predictive text generator. Realizing that when pairing with genAI, I was always the navigator (brains) of the pair, and the tool was the driver (hands). Both of us would need to attend to our responsibilities, but the principle "an idea from my head must go through someone else's hands" is a helpful framing.


I made a few notes on things I would at least need to remember to tell people about this particular style of pairing. 

External imagination. Like with testing and applications - we do better when we look at the application while we think about what more we would need to do with it - genAI acts as external imagination. We are better at criticizing something when we see it. With a testing mindset, we expect it to be average, less than perfect, and our job is to help it elevate. We are not seeking boring long clear sentences, we are seeking the core of the message. We search boundaries, arguing with the tool from different stances and assumptions. We recognize insufficiency and fix it with feedback, because the average of the existing base of texts is not *our* goal. We feel free to criticize, as it's not a person with feelings who might take offense when we deliver the message that the baby is ugly. We dare to ask things in ways we wouldn't dare to ask of a colleague. We're comfortable wasting our own time, but uncomfortable taking up space in others' days. 

Confidentiality. We need to remember that it does not hold our secrets. Anything and everything we ask is telemetry, and we are not in control of what conclusions someone else will draw from our inputs. When we have things we would need to say or share that we can't share outside our immediate circle, we really can't post all that for these genAI pairs. But for things that aren't confidential, it listens to more than your colleague in making its summaries and conclusions. And there is the option of not using the cloud-based services but hosting a model of your own from Hugging Face, where whatever you say never leaves your own environment. Just be aware. 

Ethical compensations. Using tools like this changes things. The code generation models being trained on open source code change the essential baseline of attribution that enables many people to find the things that allow them the space of contributing to open source. These tools strip names of people and the sense of community around the solutions. I strongly believe we need to compensate. At my previous place we made three agreements on compensation: using our work time on open source; using our company money to support open source; and contributing to the body of research on the change these tools bring about. Another ethical dilemma to compensate for is the energy consumption of training models - we need to reuse, not recreate, as one round of training is said to take up the energy equivalent of a car trip to the moon and back. While calling for reuse, I am more inclined to build forward the community-based reuse models such as Hugging Face over centralizing information to large commercial bodies with service conditions giving promises of what they will do with our data. And being part of an underrepresented group in tech, there are most definitely compensations needed for the bias embedded in the data to create a world we could be happy with. 

Intent and productivity. Social software testing approaches have given me ample experience of using tools like this with other people, and seeing them in action with genAI pairing. People who pair well with people pair better with the tools. People who express intent in test-driven development well and clearly get better code generated from these tools. The world may talk of prompt engineering, but it seems to really be about expressing intent. Another note is on the reality of looking at productivity enhancements. People insist they are more productive, but a lot of the deeper investigations show that there is creative bookkeeping involved. Like fixing a bug takes 10 minutes AFTER you know exactly which line to change, and your pipelines just work without attention. You just happen to use a day in finding out the line to change, and another caring for the pipeline on whatever randomness happened on your watch. 

These tools don't help me write test cases, write test automation, and write bug reports. They help me write something of my intent and choosing, which is rarely a test case or automation; they help me summarize, understand, recall from notes, ideate actions and oracles, scope to today / tomorrow - this environment / the other - the dev / the tester, prioritize and filter, and inspire. They help me more when I think of testing activities not as planning - test case design - execution - reporting, but as intake - survey - setup - analysis - closure. Writing is rarely the problem. Avoiding writing, and discovering information that exists or doesn't - that is more of a problem. Things written are a liability; someone needs to read them for writing them to be useful. 

Testing has always been about sampling, and it has always been risk-based. When we generate to support our microdecisions, we sample to control the quality of outputs. And when things are too important to be left for non-experts, we learn to refer them to experts. If and when we get observability in place, we can explore patterns after the fact, to correct and adjust. 

I still worry about the long term impacts of not having to think, but I believe criticizing and thinking is now more important than ever. The natural tendency of agreeing and going with the flow has existed before too. 

And yes, I too would prefer AI to take care of housekeeping and laundry, and let me do the arts, crafts and creative writing. Meanwhile, artificial intelligence is assisting intelligence. We could use some assisting day to day. 

Friday, May 31, 2024

Taking Testing Courses with Login

When we talk about testing (programming) courses we've taken, I notice a longing feeling in me when I ask about the other's most recent experience. They usually speak of the tool ("Playwright!", "Robot Framework with Selenium and Requests Library!", "Selenium in Java!") whereas what I keep hoping they would talk about is what got tested and how well. From that feeling of mine, a question forms: 

What did the course use as target of testing in teaching you? 

For a while now, I have centered my courses around targets of testing, and I have quite a collection. I feel what you learn depends on the target you test. And all too many courses leave me unsatisfied for the students with their certificates of completion, since what they really teach is operating a tool, not testing. Even for operating a tool, the target of testing determines the lessons you will be forced to focus on. 

An overused example I find is a login page. Overused, yet undereducated. 

In its simplest form, it is this idea of two text fields and a button. Username, password, login. Some courses make it simple and have lovely IDs for each of the fields. Some courses start off by making locators on the login page complicated, so clicking them takes a bit of puzzle solving. In the end, you manage to create test automation for successful and unsuccessful login, and enjoy the power of programming at your fingertips - now you can try *all the combinations* you can think of, writing them down once into a list.

I've watched hundreds of programmatic testing newbies with a shine in their eyes having done this for the first time. It's great, but it is an illustration of the tool; it's not what I would expect you to do when hired to do "testing". 

Sometimes they don't come in the simplest form. On testing course targets, the added stuff screams education. Like this one. 


From a general experience of having seen too many logins, here are things I don't expect to see in a login, and it's missing things that I might expect to see if a login flow gets embellished for real reasons. If your take on automating something like this is that you can automate it, not that it has stuff that should never be there in the first place, you are not the tester I am looking for. 

Let's elaborate on the bugs - or the things that should make you #curious, like Elizabeth Zagroba taught in her talk at NewCrafts just recently. You should be curious about: 

  • Why is there a radio button to log in as admin vs. user, and why is Admin the default? There are some, but very few, cases where the user would have to know, and asking that in a login form like this is unusual at best; also, only the minority of users who are both would naturally have a selection like this. For the things where I could stretch my imagination to see this as useful, the default would be User. The judgmental me says this is there to illustrate how to programmatically select a radio button.
  • Why is there a dropdown menu? Is that like a role? While I'm inclined to think this too is there to illustrate how to programmatically select from a list, I also defer my judgement to the moment of logging in. Maybe this is relevant. Well, it was not. This is either half of an aspired implementation or there for demo purposes. And it's missing a label explaining it, unlike the other fields. 
  • Why are there terms and conditions to tick? I can already feel the weight of the mild annoyance of having to tick this every single time, with changing conditions hidden in there, and promising your firstborn child on yet another Wednesday some week. The judgmental me says this is here to show the functional problem of not requiring the tick when testing. And the judgmental me is not wrong: login works just fine without what appears to be compulsory acceptance of terms, this time with the default off to communicate a higher level of commitment when I log in. 
The second-level judgement I pass on people through this is that testers end up overvaluing being able to click when they should focus on needing to click, and waste everyone's time with that - this is a trap. I could use this to rule out testers, except overcoming this level of shallowness can be taught in such a short time that we shouldn't gatekeep on this level of detail. 

I don't want to have the conversation of not automating this either. Of course we automate this. In the time I am writing this, I could already have written a parametrized test with username and password as input that then clicks the button. However, I'd most likely not care to write that piece of code. 
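For concreteness, here is a minimal sketch of the kind of parametrized test I mean, in Python with pytest and Selenium. The URL, the locator IDs, and the redirect-based oracle are made-up placeholders for illustration - they are not from the course that inspired this post.

```python
# Minimal sketch of a parametrized login test. The target URL, element IDs and
# the "landed on a dashboard" oracle are hypothetical placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

LOGIN_CASES = [
    ("valid_user", "valid_password", True),
    ("valid_user", "wrong_password", False),
    ("", "", False),
]

@pytest.mark.parametrize("username,password,should_succeed", LOGIN_CASES)
def test_login(username, password, should_succeed):
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login").click()
        # A redirect is a weak oracle for "login worked" - which is the point.
        assert ("dashboard" in driver.current_url) == should_succeed
    finally:
        driver.quit()
```

It runs, it covers the combinations, and it still says next to nothing about whether login actually protects anything.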

Login is a concept of having authentication and authorization to do stuff. Login is not interesting in its own right; it is interesting as a way of knowing I have or don't have access to stuff. Think about that for a moment. If your login page redirects you to an application like this one did, is login successful? I can only hope it was not considered successful on the course I did not take but got inspired by to write this. 

I filled in the info and got redirected to an e-store application. However, with the application URL and another browser, I get to use the very same application without logging in. I let out a deep sigh, worried for the outcome of the course for the students.

Truth be told, before I got to check this I already noted the complete absence of logout functionality. That too hinted that the login may be an app of its own for testing purposes only. Well, it does illustrate combinations you can so easily cover with programmatic tests. What a waste.  

What does work around login in real projects look like? We can hope it looks like taking something like Keycloak (an open source solution in this space), styling a login page to look like your application, and avoiding the thousands of ways you can do login wrong. You'll still face some challenges, but successful and failing login aren't the level you're expected to work on. 

What you would work on with most of your programmatic testing is the idea that nothing in the application should work if you aren't authorized. You would be more likely to automate login by calling an API endpoint that gives you a token you'd carry through the rest of your tests on the actual application functionality. You'd hide your login and roles in fixtures and setups, rather than create login tests. 
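A minimal sketch of that direction, assuming a hypothetical token endpoint and response shape - the point is that login lives in a fixture, and the tests are about authorization on the actual functionality:

```python
# Sketch only: the endpoint, credentials and response shape are assumptions.
import pytest
import requests

BASE_URL = "https://example.test"  # hypothetical application under test

@pytest.fixture(scope="session")
def auth_token():
    # Log in once over the API and reuse the token for the whole test session.
    response = requests.post(
        f"{BASE_URL}/api/auth/token",
        json={"username": "testuser", "password": "testpass"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

def test_orders_require_authorization(auth_token):
    # The interesting claim: the functionality is closed without the token...
    assert requests.get(f"{BASE_URL}/api/orders", timeout=10).status_code in (401, 403)
    # ...and open with it.
    headers = {"Authorization": f"Bearer {auth_token}"}
    assert requests.get(f"{BASE_URL}/api/orders", headers=headers, timeout=10).status_code == 200
```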

The earlier post I linked above is based on a whole talk I did some years back on the things that were broken in our login beyond login. 

Learn to click programmatically, by all means. You will need it. But don't think that what you were taught on that course was how to test login. Even if they said they did, they did not. I don't know about this particular one, but I have sampled enough to know the level of oversimplification students walk away with. And it leads me to think we really, really need to do better in educating testers. 

Tuesday, May 28, 2024

Compliance is Chore, not Core

It's my Summer of Compliance. Well, that's what I call the work I kicked off and expect to support even when I switch jobs in the middle. I have held a role of doing chores in my team, and of driving towards automating some of those chores. Compliance is a chore, and we'd love it if it was minimized to invisible while producing the necessary results. 

There are, well, for me, three compliance chores. 

There's compliance with the company process, process meaning what is required of development in this company. Let's leave that compliance for another summer. 

Then there are the two I am on for this summer: open source license compliance and security vulnerability handling. Since a lot of the latter comes from managing the supply chain of other people's software, these two kind of go hand in hand. 

You write a few lines of code of your own. You call a library someone else created. You combine it with an operating system image and other necessary dependencies. And all of a sudden, you realize you are distributing 2000 pieces from someone else. 

Dealing with the things others created for compliance is really a chore, not core to the work you're trying to get done. But licenses need attending to and use requires something of you. And even if you didn't write it, you are responsible for distributing other people's security holes. 

Making things better starts with understanding what you got. And I got three kinds of things: 

1. Application-like dependencies. These are things that aren't ours but essentially make up the bones of what is our application. It's great to know that if your application gets distributed as multiple images, each image is a licensing boundary. So within each image with an application layer, you want to group things with copyleft (infectious license) awareness. 

2. Middleware-like dependencies. These are things that your system relies on, but which are applications of their own. In my case, things like RabbitMQ or Keycloak. Use 'em, configure 'em, but that's it. We do distribute and deploy them, though, so compliance needs exist. 

3. Operating-system-like dependencies. Nothing runs without a layer in between. We have agreements on the licensing boundary between this and whatever sits on top. 

So that gives us boundaries horizontally, but also vertically, to a more limited degree. 

Figuring this out, we can describe our compliance landscape.


The only formats in which this group in particular redistributes software are executables (the olden way) and images (the new way). Understanding how these get built up was part of the work to do. We identified inputs, the outputs we would need, and the impacts we seek from having to create those outputs. 

I use the landscape picture to color code our options. Our current one, "scripts", takes source code and dependencies as input, ignores base images, and builds a license check list and license.txt - with a few manual compliance checks where you need to know what to look for in the patterns of license change. It's not hard work, but it's tedious. It fails for us on two of the impacts: we do chore work to create and maintain the scripts, unable to focus on core, and it requires manual work every single time. 
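To give a feel for the level those scripts operate at, here is a stripped-down sketch of the same idea: read a dependency listing, write a license.txt, and flag license families that need a human look. The input format and the marker list are illustrative assumptions, not our actual scripts.

```python
# Sketch: flag copyleft-family licenses from a dependency listing for manual review.
# Assumes a CSV with name,version,license columns - an illustrative stand-in.
import csv

REVIEW_MARKERS = ("GPL", "LGPL", "AGPL", "MPL", "EPL")  # patterns we check by hand

def check_licenses(dependency_csv="dependencies.csv", output="license.txt"):
    needs_review = []
    with open(dependency_csv, newline="") as deps, open(output, "w") as out:
        for row in csv.DictReader(deps):
            line = f'{row["name"]} {row["version"]}: {row["license"]}'
            out.write(line + "\n")
            if any(marker in row["license"].upper() for marker in REVIEW_MARKERS):
                needs_review.append(line)
    return needs_review

if __name__ == "__main__":
    for item in check_licenses():
        print("review:", item)
```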

We're toying with two open source options. ORT (OSS Review Toolkit) shows promise to replace our scripts, and possibly to extend to image-based scans, as an open source project; it does not really come wrapped as a service. Syft+Grype+GoTemplates seems to do some of the tricks, but leaves things open in the outputs realm. 

And then we're toying with a service offering around open source compliance, where money does buy you a solution, with FOSSA. 

I use the word toying as a noncommittal way of discussing a problem I have come to understand in recent weeks. 

Running a compliance scan for base images, there are significant differences in the numbers of components identified with Syft vs. Docker SBOM vs. Docker Scout vs. what is self-proclaimed at the source. There's a quality assessment tool for SBOMs that checks for many other things, but not correctness, showing significant other differences. And that is just the quality of the SBOM piece of the puzzle. 
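A sketch of how we ended up looking at those differences - diffing the component sets of two SBOMs generated for the same image. This assumes both tools can emit CycloneDX JSON; the file names are placeholders.

```python
# Sketch: compare component listings from two CycloneDX JSON SBOMs of the same image.
import json

def components(sbom_path):
    with open(sbom_path) as f:
        sbom = json.load(f)
    # CycloneDX lists packages under "components" with name and version fields.
    return {(c.get("name"), c.get("version")) for c in sbom.get("components", [])}

syft_set = components("base-image.syft.cdx.json")    # placeholder file names
scout_set = components("base-image.scout.cdx.json")

print("only in syft output: ", len(syft_set - scout_set))
print("only in scout output:", len(scout_set - syft_set))
print("in both:             ", len(syft_set & scout_set))
```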

We started off with the wrong question. It is no longer a question of whether we are "able to generate SBOMs"; instead we are asking: 
  • should we care that different tools provide different listings for the same inputs - as in, are we *really* responsible for quality or just a good-faith effort 
  • how we should make those available next to our releases 
  • how we scale this in an organization where compliance is chore, not core 
This 'summer of compliance' is forcing me to know more of this than I am comfortable with. When quality matters, it becomes more difficult. *If* it matters. 

Saturday, May 25, 2024

To 2004 and Back

Year 2004 was 20 years ago. Today I want to write about that year, for a few reasons. 

  • It's the first year in which I have systematically saved up my talks on a drive
  • It's the year when I started doing joint talks with Erkki Pöyhönen and learned to protect access to materials by using the Creative Commons Attribution license
  • I have just finished taking my 2004 into my digital legacy project, meaning my slides from that year are no longer on my drive but available with the Creative Commons Attribution license on GitHub.  

To put 2004 in context, I was a fairly new senior tester then. I became a senior because I became a consultant. There is no chance in hell I was actually a senior then. But I did wonders with the appreciation and encouragement of the title. 


My topics in 2004 were Test Automation, Test Planning and Strategies, Test Process Improvement, and Agile Testing. Most of my material back then was in Finnish. I believed that we learn best in our native language. I was drawing from what was discussed in the world I was aware of, and clearly I was aware of Cem Kaner. 

Looking at my materials from then, I notice a few things:

  • Most of my effort went into explaining to people here what people elsewhere say about testing. I saw my role as a speaker as someone who would help navigate the body of knowledge, not someone creating that body of knowledge. 
  • The materials are still relevant, and that is concerning. It's concerning because the same knowledge needs still exist and not enough has changed. 
  • I would not deliver any of the talks that I did then with the knowledge and understanding I have now. My talk on test automation was on setting up a project around it, which I now believe is more a way of avoiding it than doing it. Many people do it with a project like that, and it's a source of why they fail. My talks on concepts in planning and process make sense but carry very little practical relevance to anything other than acquisition of vocabulary. 
  • My real work reflected in those slides is attributed to ConformiQ, my then-employer. I did significant work going through Finnish companies, assessing their testing processes, and creating a benchmark. That data would be a really interesting reflection on what testing looked like in tens of companies in Finland back then, but it's proprietary, except for the memories I hold that have since impacted what I teach (and learn more on). 
I talked about Will Testing Be Automated in the Future, and while I did not even answer the question in writing in my first version of the talk, the second one came with my words written down:

  • Automation will play a role in testing, especially when we see automation as something applicable wider than in controlling the program and asserting things. 
  • Only when it is useful - moving from marketing promises to real substantial benefits
  • Automation is part of good testing strategy
  • Manual and automated aren't two ways of executing the same process; it's a transformation of the process
None of this is untrue, but it is also missing the point. It's what I, knowing now what I did not know then, call feigned positivism. Words that sound like support, but are really a positive way of being automation-averse. It's easy to recognize feigned positivism when you have changed your mind, but it's harder to spot live. 

Being able to go back and reflect is why I started this blog back in its day. I expected to be wrong, a lot. And I expected that practicing being wrong is a way of reinforcing a learning mindset. 

That one still holds. 2004 was fun, but it was only one beginning.