Saturday, February 8, 2025

Evolving scoring criteria for To Do App for recruiting

I have used various applications for assessing people's (exploratory) testing skills in recruiting. While it's already a high-stress scenario, I feel there needs to be a way to *test something* if that is the core of the job you'd get hired for. I may believe this because I, too, was tested with a test of testing 28 years ago. 

Back when I was tested, the application was two versions of Notepad: the original English one and a localized version with seeded bugs. My setting for doing the test was one hour in front of a computer in a classroom, being observed. There were 8-12 of us in total in each scheduled session; we were all nervous, did not know each other, and most likely never even introduced ourselves. We did our thing, reported the discrepancies we identified as clearly as we could, and got a call back or not. I remember they did these tests at scale. The testing teams we had weren't small, and we were all total newbies. 

This week I assigned the To Do app as the thing to test. For newbies, I make it a take-home assignment and recommend spending under two hours. For experienced testers, I spend half an hour face to face out of the max two hours of interviewing time we allocate. The candidates' work is not back yet, but my own work of looking at the application got done. 

The most recent form of this take-home assignment is one where I give two implementations of the To Do App, and explain they are what developers use to show off their best practices for applying front-end frameworks.
  1. Elm: https://todomvc.com/examples/elm/
  2. Angular: https://todolist.james.am/#/

I ask for outputs:

  • A clearly catalogued listing of identified issues you’d like to give as feedback to whoever authored the version
  • Listing of features you recognize while you test
  • Description of how you ended up doing the assignment
  • (optional) example of test automation in language and framework of your choice
The previous post listed the issues I am aware of, and today I created a scoring list for the homework, in case you have your own version or would like to talk about mine. I am still hoping for the day when some of the people doing the assignment *surprise me* by reading about how to approach these exercises from my blog. 

I can't expect complete newbies to get to all of it, but what worries me the most is that too many seasoned testers don't even scratch the surface. We still expect testers to learn testing by testing, often without training or feedback. 

To Do Application - Assessment Grid

ESSENTIAL INSIGHTS
Architecture: frontend only
Same spec for both implementations
Material online to reuse
Reading the room, clarifying assumptions
Optional is chance to show more
Presenting your work is not just description of doing

ESSENTIAL ACTIONS
Research: find the spec as it is online
Research: ask questions
Meta: explain what and why you do
Learning: showing something changed in knowledge while testing
Bias to action: balance explaining and results
Modeling function, data, environments
Recognizing tools of environment
Choosing a constraint to control perspective
Stopping criteria: time or coverage
Classifying and prioritizing
Clarity of reporting issues
Reporting per implementation and common for both
TL;DR - expect lazy readers
Using and explaining a heuristic
Awareness of classes of data (e.g. naughty strings)
Surprise me (e.g. screenshot to genAI)

RESULTS
Functional problems (e.g. off by one count, select all, tooltip)
Functional problems, browser dimension (e.g. persistence, icon corruption)
Usability problems (e.g. light colors, lack of instructions)
Implementation problems (on console) e.g. messages in code and errors in console
Data-related problems: creating empty items
Data-related problems: trim whitespace
Data-related problems: special characters
Missing features (e.g. order items)
Typos
In-app consistency (e.g. always visible functionality that does not always work)
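
The data-related problems above lend themselves to a parameterized sketch. This is a minimal illustration, not anyone's submitted work: the expected behavior (empty items rejected, whitespace trimmed, special characters kept as-is) follows the TodoMVC spec, and the sample strings are my own assumptions standing in for a fuller naughty-strings list.

```python
# Sketch: classes of data to try against the "new todo" input field.
# normalize_todo models the spec's expected behavior, so mismatches
# against a real implementation would surface the data-related problems
# listed above (empty items, untrimmed whitespace, mangled specials).

def normalize_todo(text: str):
    """Spec-like reference: trim input, and drop items that end up empty."""
    trimmed = text.strip()
    return trimmed if trimmed else None

# Illustrative samples per class of data (assumptions, not an exhaustive list)
test_data = {
    "empty or whitespace-only": ["", "   ", "\t"],
    "whitespace-padded": ["  buy milk  ", "\tbuy milk"],
    "special characters": ["<b>bold?</b>", "emoji \u2728", "'; DROP TABLE todos;--"],
}

for label, samples in test_data.items():
    for sample in samples:
        print(label, repr(normalize_todo(sample)))
```

A tester's submission would replace the print loop with checks against the actual app, but the categories carry over as-is.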

AUTOMATION
Working with selectors
Reading error messages
Scenario selection
Locators
Structure and naming
Describing choices
Readme for getting tests to run

MISTAKES THAT NEED EXPLAINING
Overfocus on locators while the application is unknown and automation is not in play
Wanting to input SQL injection string

I ended up cleaning this all and making it available at GitHub: https://github.com/exploratory-testing-academy/todoapp-solution 

Monday, February 3, 2025

That Pesky ToDo app

While I am generally of the opinion that we don't need injected problems in applications that are already target-rich as is, today I went for three versions of a well-known test target, namely the ToDo MVC app.

Theoretically, this is where a group of developers show how great they are at using modern JavaScript frameworks. There is a spec defining the scope, and the scope includes a requirement for this to work on modern browsers (latest Chrome, Firefox, Opera, Safari, IE11/Edge). 

So I randomly sampled one today - the Elm version, https://todomvc.com/examples/elm/

I took that one since it looked similar in styles to what Playwright uses as their demo, https://demo.playwright.dev/todomvc/, while the latest React version already has the extra-light styles updated to something that you are more likely to be able to read. 

I also took that one since it looked similar to the version flying around as a target of testing with intentionally injected bugs, https://todolist.james.am/.

My idea was simple: 

  • start with the app, to explore the features
  • loop to documenting with test automation
  • switch over implementations to see if the automation is portable across various versions of the app
I had no idea of the rabbit hole I was about to fall into. 

The good-elm-version was less good than I expected: 
  1. Select all does not work
  2. Edit mode cannot be escaped with esc
  3. Unsaved new item is not removed on refresh
  4. Edit to empty leaves the item while it should be removed
  5. Edit to empty messes up the layout, and I should not see it since 4) should be true
So I looked at the good-latest-react version, only to learn persistence is not implemented. 

And that is where the rabbit hole went deep. I researched the project materials a bit, and explored the UI to come up with an updated list of claims. The list contains 40 claims. That would let me know that good-elm-version was 90% good, 10% not good. 


Looking at the bugs seeded version, there's plenty more to complain: 

  1. Typos, so many typos: need's in placeholder, active uncapitalized, toodo in instructions
  2. "Clear" is visible even when there are no completed items to clear
  3. "Clear" does not clear, because it is really "Clear completed"
  4. Counter has off by one error
  5. Placeholder text vanishes as you add an item, but returns on refresh
  6. Sideways 'a' as the icon for "mark all as complete" is not the visual I would expect, nor is the 'A' with a tilde on top for deleting - these appeared on Chrome after using it enough, but the state normalized on a forced refresh. 
  7. Select all does not unselect all on second click
  8. Whitespace trim is not in place if one edits items with whitespace, only when items are shown
  9. <!-- STUPID APP --> in comments is probably intentionally added for fun
  10. ToDo: Remove this eventually tooltip is probably added for fun as well
  11. Errors on missing resources on console are probably added for fun too
  12. "Clear" is missing the counter after it that the spec asks for
  13. Usability with clear completed: since its functionality only works on the all and completed filters, does it really need to be visible on the active filter?
  14. URL does not follow the technology pattern you would expect for the demo apps. 
In the statistics of the feature listing though, the pretty listing of capabilities is hard to map to the messiness of the issues: 

✓ should show placeholder text
✓ should allow to clear the completion state of all items
✓ should trim entered text
✓ should display the current number of todo items
✓ should display the number of completed items
✓ should be hidden when there are no items that are completed
✓ should allow for route #!/
 
7/40 (17.5%) does not feel like it is essentially worse, but then again, there are many types of problems that the list of functional capabilities does not lead one to. 

There is also usability-improvement conversation type of feedback that is true for both versions. 
  1. The annoyingly light colors make seeing the UI and instructions hard
  2. None of these allow for reordering items, and it feels like an omission even if intentional
  3. None of these support word wrapping
  4. Usability of the concepts "active" and "completed" for to do items is a conversation: are there better words that everyone would understand more clearly? 
  5. Usability with a mouse: there's no adding with a mouse, even if that feels by design
  6. Usability of the whole router / filter concept design can be confusing, as you may have a filter active that does not show the item you add
  7. Stacked shadow effect at the bottom makes it seem like there are multiple layers. This does not connect well with the filters / routing functionality. 
  8. Delete, edit and select all options take some discovering. 

You could also compare to what you get from a nicely set up demo screenshot of the bugged version. 





The pesky realization remains: seeding bugs is unfortunately unnecessary. While I got "lucky" with the elm version's four bugs, I also got lucky with the refactored React version that is missing the implementation of persistence. 

There's also an idea that keeps coming up with experienced testers that we really need to stop throwing at random: SQL injections. For a frontend-only application without a database, it makes so little sense, unless you can continue your story with an imagined future feature where the local storage JSON gets saved and used in an integration. Separating things true now from risks for the future is kind of relevant in communicating your results. 

Playing more with the automation is left for another day. The 9 tests of today were just scratching the surface, even if they 100% pass on the Playwright practice version and don't on any of the others. 


Saturday, January 4, 2025

Framing pain to gratefulness

Today, I shed a few tears for feelings I needed to let out. Giving those feelings the time box they needed today was not sufficient, so I felt the need to write about them. 

I was in my feelings of pain because I made the final calls choosing the four lucky people who get awarded SeleniumConf Valencia 2025 full scholarships, including international travel, accommodation and conference tickets. I know I should be happy for the four, but today I am feeling the pain for the 269 that got listed but received a No. 

I volunteer with the Selenium Leadership Committee. This year I was trying very hard not to volunteer with SeleniumConf which is our flagship event, but some things are just too important to not show up for, especially if absence risks them. These scholarships are one of those things for me. 

I would like to see a world where all conferences set up a few free places, with or without travel included, to change the face of the industry at the conferences. It does not happen just from being a great and worthwhile idea; it needs someone doing the work. My tears today were part of that work, because the work is not easy. Not doing it is so much easier. 

Selenium is a community that has been founded and built with a leadership that understands diversity needs work. I joined PLC because I saw that before my time in action. Talent is distributed equally, opportunity is not. And opportunities can and should be created to balance. 

The scholarships have been a part of the SeleniumConf concept for some time now, and we do it to bring participants in for free, in hopes of building them up to be speakers and contributors in the world. Last summer we also started another form of scholarship, which is for speaking of Selenium with underrepresented voices in conferences other than Selenium's own. 

I feel the pain of sorting through 273 brilliant professionals in need of an opportunity, reducing the selection in the end to people who are disabled, black, gay and women. I feel the joy for the lucky four that wouldn't exist without my pain. And working through that pain, I remember again to frame this with gratefulness. 

In other communities, I would have first needed to fight for such budget to exist. Selenium project already knows this is necessary. I'm grateful this is a routine we go through. 

The work for opening doors needs to be distributed. I am grateful I am in positions where holding the door is possible for me. 

With that said, I am looking forward to meeting the four great people who ended up on top of the shortlist. They will make my conference time just a little bit more joyful, and all of you joining will be able to meet them as participants just like everyone else. Introducing them to everyone else participating, without the association to the underrepresentation that opened the door, will be part of what makes my socially awkward form of extroversion in conferences a little easier. 

The change I want to see requires work. If it is a change you want to see, please volunteer for the work. I am happy to pass on the torch in a community that already holds space for it. 


Thursday, January 2, 2025

Socializing requirements

There's an article making rounds in the Finnish software communities about the greatness of low-code platforms. As the story goes, a public servant learned to create a system he had domain expertise in on the side of his job, and the public organization saves a lot of money. 

The public servant probably would not have learned higher-code tooling to do this, but did manage to build a LAMP-stack application with one of the low-code tools. 

The conversation around this notes the risks of maintenance - whether anyone else can or will take the system forward - but also the insight that a huge part of building software is communicating domain expectations between people with different sets of knowledge. The public servant explaining what the system should be like to someone who could use the tools to build something like this would probably have been a project effort on its own scale. 

The fewer people we need to complete the full circle of sufficient knowledge to build something in a relevant timeframe, the easier it is. Some decades in the domain, and in the intricate details of where the pains and benefits lie, most likely helped with the success. 

There are days when I wish I could just stop communicating with others, trying to explain what the problem we are solving is, and just solve and learn it myself. Those are days when I refer to myself as a #RegretfulManager. Because working at a contained scale with fewer people in the bubble is easier, progress feels faster, and it's really easy to work on the illusion that value to me is value for everyone, and that I did not miss out on anything for security, maintenance or impacts on people who aren't like me. 

---

Another article making rounds in the Finnish software communities is about delivering a system with some hundreds of requirements, and then having a disagreement on who is responsible for the experience of finding a lot of things missing or incorrect per the interpreted expectations. The conversation around that one makes the point that the more complete interpretation of a requirement is the requirement when there is room for interpretation. 

The conversation of interpretations continues in court, even if it currently dwells in the articles. We'll eventually learn about agreements constraining parties in making their interpretations, and by being in court, everyone is already failing. 

---

Over the years of working with requirements from a testing perspective, I have come to learn a few things these articles making rounds nicely illustrate: 

  • Just like test plans aren't written but socialized, requirements are not written but socialized. Interpreting is socializing. And examples, describing what is (descriptive), complete requirements, which describe what should be (prescriptive). 
  • Features are well defined when we have a description of what is now and a description of what is after what should be. The journey needs stepwise agreements. 
  • No matter how well you prescribe and describe, you are bound to learn things someone did not expect. It's better to discuss those regularly rather than at the end of the project with 'acceptance testing'. Let your testing include starting of those conversations. 

Thursday, December 19, 2024

Purposeful and realistic return on investment calculations

It's the time of the year that allows us all an excuse to message people we usually don't. I had a group like that in mind: the managers of all of my tester colleagues at work. There are two reasons why I usually don't message them:

  1. We coexist in the same pool as peers, but we have no official connection 
  2. There is no list of them easily accessible
You can't create a connection with your peers without being intentional about it. Having now tracked through coverage of fellow customer contact people (200+ by now), I needed a new dimension to track. This is just one of them, although an important one. 

Yet still, you can't contact them if you don't know who they are. With a large number of test professionals, asking around won't work. And the manual approach of tracking through the organization chart is probably a lot of work, even when the data is available. 

For six months, I have accepted that such a list does not exist. Call it holiday spirit or something, but that response was no longer acceptable to me. So I set out to explore what the answer is. 

Writing a script can be one of those infamous "10 minute tasks", but for purposes of discussing return on investment, let's say it takes 300 minutes. Maybe it is because originally I wanted to create a list of test professionals and automatically update it to our email list, because I hate excluding new joiners. Since I was exploring, I never forgot what I set out to do, but I may have ended up with more than I had in mind at first. I take pride in my brand of discipline, which is optimizing for results and learning, not for a commitment made at the time when I knew the least - which is any time before this moment. 

So, I invested 300 minutes to replace 299 minutes of manual work. Now with that investment, I am able to create my two lists (test professionals and their managers) by investing 1 minute of attention on running the script (and entering the 2FA response). How much time is this saving, really?

I could say I will run this script monthly, because that is the cadence of people joining. Honestly though, quarterly updates are much more likely because of the realities of the world. I will work with the optimistic schedule though, because 12x299 is so much more than 4x299. Purposeful return on investment calculations for the win! 

You may have done this too. You take the formula: 

You enter in your values and do the math (obvs. with a tool that creates you a perfect illustration to screenshot). 

And there you have it. 1096% return on investment, it was so worth it. 
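
The arithmetic behind that figure can be reproduced in a few lines, using the numbers from above: a 300-minute investment, a 299-minute manual task replaced, and the optimistic monthly cadence. The 7.5-hour working day used for the "extra day" scenario is my assumption to match the later figure.

```python
# ROI = (savings - investment) / investment, with the post's numbers.

investment = 300      # minutes to write the script
manual_task = 299     # minutes of manual work replaced per run
runs_per_year = 12    # the optimistic monthly cadence

savings = runs_per_year * manual_task            # 3588 minutes
roi = (savings - investment) / investment * 100
print(f"{roi:.0f}%")                             # 1096%

# Factoring in an assumed extra 7.5-hour day (450 min) of hardening work
# lands close to the 380% mentioned later in the post.
investment_with_day = investment + 450
roi_with_day = (savings - investment_with_day) / investment_with_day * 100
print(f"{roi_with_day:.0f}%")                    # ~378%
```

The same formula with a week of extra work (five such days) drops the result to roughly 40%, which is the shape of the "realistic" calculation the post builds toward.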

The managers got their electronic Christmas card. I got their attention to congratulate them on being caretakers of a brilliant group of professionals. I could sneak in whatever message I had in mind. Comparing cost and savings like this, purposeful investment calculations, it's such a cheap trick. 

It does not really address the value. I may feel the value of the 300 minutes today, but is there real repeat value? It remains to be seen. 

That is not all that we purposefully do with our investment calculation stories. I also failed to introduce you to the three things I did not do. 

  1. Automating authentication. 2FA is mean. So I left it manual. Within this week we have been looking at 2FA on two Macs, where automating it works on one and not on the other. I was not ready to explore that rabbit hole right now. Without it, the script is manual, even if it runs with 1 minute attended and 15 minutes unattended. 
  2. Automating triggering it with a pipeline. It's on my machine. It needs me to start it. While pushing the code to a repo would give access to others, telling them about it is another thing. And while it works on my machine, explaining how to make it work on their machines, given the differences in machines and knowledge levels of the relevant people, is yet another. This should really be just running on a cadence. 
  3. Maintainability. While this all makes sense to me today, I am not certain the future me will appreciate how I left things. No readme. Decent structure. Self-documenting choices of selectors, mostly. But a mental list of things I would do better if I was to leave this behind. I would be dishonest saying this is what I mean by finished work when I set out to leave automation behind. 
When you factor all of that in, there is probably a day or a week of work more. A week, because I have no access to pipelines for these kinds of things, and no one can estimate how quickly I will navigate through the necessary relationship building to find a budget (because cloud costs money), gain access and the sort. 

The extra day brings my ROI to 380%. A week extra brings me to: 

Now that's a realistic return on investment calculation, even if it still misses the idea that this could be not done, not repeated, if it was not valuable. And the jury is still out on that.

This story is, most definitely, inspired by facing the balance of purposeful and realistic often at work. And I commit to realistic. As a matter of fact, I am creating myself a better approach to realistic, because automation is great, but false expectations just set us up for failure. 



Thursday, December 5, 2024

A Six Month Review

That time of the year when you do reflections is at hand. It's not the end of the year yet, even if that is close. But a few relevant events in my cadence of reflections culminated yesterday and today, leading me to write for a seasoned tester's crystal ball. 

The three relevant events are all worth a small celebration: 

  • Yesterday was the first day after my trial period at CGI as Director specializing in testing services and AI in application testing. We are both still very happy with each other. I should say it more clearly - I love the work CGI built for me, and I love that I get to bring in more people to do that work with me next year too. CGI's value-based positioning as an organization that supports volunteering, and the great community of testing professionals (testers, developers, product owners and managers/directors throughout the organization showing up for testing), have been a treat. 
  • Today the TiVi ICT 100 Most Influential list was published, just in time for Finnish Independence Day, and I found my name on the list for the 6th year in a row. You can imagine there are more than 100 brilliant professionals influencing ICT in Finland; Finland has had a good reputation for educated ICT professionals at internationally competitive prices, meaning that for a small country, we have a lot of brilliant people in ICT. Representing testing on this list, even with the title 'Director', is a recognition for all the great people teaching each other continuously in the community. Testing belongs. 
  • Today we received positive news of one frame agreement bid I had been contributing to in my 6 months at CGI. Building a series of successes in continuing as a partner of choice with me around brings me joy. 
While these things are the triggers of reflection today, it is also a good time to take a look at themes of testing coming my way. 

Distributing Future Evenly

When searching for work with a purpose, I welcomed the opportunity to put together themes of relevance:
  • Impact at Scale. I believe we need to find ways of moving many of our projects to better places for quality and productivity. Many organizations share similar problems and there has to be ways of creating common improvement roadmaps, stepping plans to get through the changes, and seeing software development (including testing) with increased successes. Scale means across organizations, but also over time when people change. 
  • Software of Relevance. I wanted to work on software that I feel connection with. I have been awarded multiple customers with purposes of relevance and feel grateful for the opportunities. 
Turns out this is a combination of conversations internally with people I work with, our current and potential customers, and the testers community at large. 

I have met over 150 peers at CGI in 1-on-1 settings (I tracked statistics for the first four months and, on reaching 160, decided counting something else would make sense). I have enjoyed showing up with our internal Testing Community of Practice in Finland, and started to create those internal connections globally. I have learned to love the networked communication expectation where hierarchy plays a small role. I've done monthly sessions for the external testing community of practice Ohjelmistotestaus ry, with a broadcast of topics I work on. And I have been invited to conversations with many clients. 

AI in Application Testing

With testing as my backdrop, living up to a reputation as a 'walking testing dictionary', there is a big change ahead of us with increased abilities in automation, with AI helping with quality and productivity.

I have looked at tens of tools with AI in them. For unit testing, for test automation programming, and for any and all tasks in the space of testing. From exploring these tools, I conclude there is a need to see through the vocabulary to make smart choices. With models as a service, it's what we build around those models that matters. The technical guardrails for filtering inputs and outputs. The logic of optimizing flows. The integrations that intertwine AI into where we are already working. 

In these 6 months, I created a course I teach with Tivia ry - 'Apply AI of Today on Your Testing'. Tivia sets up public classroom versions, and both they and CGI directly are happy to set up organization specific ones. 

The things that I personally find exciting in this space in the last six months are: 
  • Hosted models. Setting up the possibility to have models on your own servers (ahem, personal computers too), so that I can let go of worrying about what my data tells about me or reveals of my work. 
  • Open source progress. Be it Hercules for agents turning Gherkin to test results, or any of the many libraries in the Selenium ecosystem showing proofs of concepts on what integrations are like, those are invaluable. 
  • Customer Zero access. Having CGI be a product company allows us to apply things. Combine the motivation and the means, and I have learned a lot. And yes, CGI has a lot of products, including CGI NAVI, which is a software development artifact generation product. 
Digital Legacy

The time focusing on scaling testing has warranted me revisiting my digital legacy. From being able to create AI-Maaret, thanks to having 20 years of my thinking in written formats, to making choices of how I would increase access to my materials for everyone's benefit, it has been a continuous theme. 

From 50 labels of courses in Software Testing, I brought things down to only 18. 


Should testing expertise be a capability you are building, we could help with that. We're currently prioritizing which of these we teach to CGI testing professionals next year, in addition to all the other training material we have already had without my digital legacy. 

Purpose-wise, I am now moving from having access to this myself to institutionalizing the access. That means changing the license for the next generations of this from CC-BY to CC-BY-NC-SA, and you can always purchase licenses that allow other uses too. 

Collaboration and Sharing

With tentacles in the network towards all kinds of parties, I still facilitate collaboration. My personal vision is to frame a lot of work with open sharing, this being my current favored quote:


The tide is something we create together. And we want to go far. Sharing openly is a platform for that collaboration, so you see me sharing on LinkedIn, Mastodon and now also Bluesky. My DMs on these are open. 

Networking

With open DMs and email (maaret.pyhajarvi (at) cgi . com), I am available for conversations. I have targets set on networking, and should you have a thing we can discuss for mutual benefit, don't be a stranger - get in touch. 

I am open to showing up in events, but I am still not proposing talks in CfPs. Please pull if I can help, and I help where I can. 

I show up for Tivia ry to take forward the agenda of Software in Finland. I show up for Ohjelmistotestaus ry to build a cross-company platform on testing. I show up for Mimmit Koodaa in any way I can, because that program is the best thing we've had for many, many years. And I show up for Selenium, as a volunteer in the Selenium Project Leadership Committee. I show up for the FroGSConf open space as a participant because it is brilliant. I show up for private benchmarking, currently with three groups of seasoned, brilliant professionals driving a global change in testing through supporting one another. 

As a new thing, I have discovered "Johtoryhmä", women in ICT leadership in Finland. I will show up for that too. 

And when we meet, please talk to me. I am still a socially awkward extrovert who imagines people don't automatically want to talk to me. You talking to me helps me. And our conversations may help us both. 

Saturday, November 30, 2024

You are enough

Sometimes there is just too much going on. Too many self-volunteered tasks and deadlines, some more visible than others. And you find yourself doing a lot. Feeling in control though, you work through the pile, recognizing there is no progress if you just juggle the load, filled with anxiety. Finding that sense of agency, you do what is in your power to do. That sense of power is essential. 

I came to think of this through multiple triggers in this space over the last weeks: 

  • Firehosing information
  • Exercising replan
  • Managing anxiety in others
With these stories to share, I ground myself in the purpose I have a blog for in the first place. Not to write perfect articles, but to write down things to reflect on. The usefulness of my reflections to others remains at the discretion of the reader. It's different from what the majority of bloggers do, but then again, not everyone frames their blog as a 'seasoned tester's crystal ball'. 

Firehosing information

I know a thing or two. I learn more by building on the foundation of what I know by sharing it, and have built a bit of an identity in reflection and proving myself wrong. I take pride in the 180-degree turns of opinion I can recognize in my work. The attitude towards whether continuous integration is a good idea. The idea of what test cases should look like. The idea about separation of concerns between developers and testers. The attitude towards automation. I have written evidence that I learned and changed views. It used to scare me enough to not say the things of today. But I learned that I am enough today, even when today's me is not enough for tomorrow. 

I have needed to tap into this lesson a lot now that I have a lot of new colleagues, with many of those learnings being ones I have needed to lead myself through over the years. I have needed to remember that I did not change my perspective because I was told what was right. I changed my perspective because I observed the options myself, and had agency in making those changes to my internal model of the world. 

I have a lot to say. And I moderate on how much of it I say. I have decided now that I say one piece from stage a month with audience of my colleagues and anyone in the Finnish community, through the platform that Software Testing Finland (Ohjelmistotestaus ry) offers. And I hold space for conversations with my colleagues on leading and managing testing twice a month. Once a day I can post on our internal channel to share something. Once a day I can post on LinkedIn to share something. And on Mastodon / Bluesky, I can say what I want to say when I want to say it. 

I write and say more in a week than others can consume. Sharing is an outlet, and a processing method. 

November was a month of firehosing. I did seven new talks. One because of my choices of cadence, but six others because someone pulled and asked for them. Doing my tally of stage presence a month early was an act of offloading. 

It's been hard to remember that the one guy who gave me feedback last year that everything I ever say is shit, and the other guy giving me feedback this year that "quantity is not quality" - just in case I did not already deal with enough negative internal self-talk - aren't the full picture. I am enough. You are enough. It is true for us all, without comparison to others. 

Exercising replan

While my head needs an outlet of information through sharing, the visible parts of what I do aren't all I do. Life has been a lot, work has been a lot. 

I found myself in a situation where I had so many competing tasks to complete that I couldn't. 

I couldn't deliver a report to a customer that I had promised. So I told the customer, and took a week of extra time. Turns out they appreciated the confession of a consultant overestimating the pace at which they can analyze a complex situation. 

I could not find the time to talk to half a dozen people about the test automation I wanted to. I didn't, and while they may not forgive me, I forgave myself by letting them know what I learned while I could not show up for them. 

I have still one more thing on replanning, and I am balancing the cost of replanning on that one. Me doing it might just be easier than me replanning it for someone else, we've already been through two bounces back to me. 

Ending up with too much requires a replan. It requires confessing a need of help, of time and space, and of support. And remembering I am enough, even when I am not enough for all the things I may end up with. 

Managing anxiety in others

Being a leader is about having people who follow you, sometimes with positional power but sometimes just because they were in search of someone with ideas. I still call myself a regretful manager, because I don't want to manage; I don't want to lead. I would much prefer if we shared the leadership and the doing, and found ourselves negotiating our journey together.

I have bubbles that recharge me where the world is peer to peer. Our group of regretful managers. My monthly benchmarks on work with a peer, now running for multiple years. I love these groups. 

But I also have other groups where I show up to hold space. Even on the volunteering side, I find I volunteer to manage. Setting context for decisions, while trying to stick to my own boundaries of what I can take on. And making space, with the spoons I have, to ease the anxiety of others. Reminding them they are enough, because while I can tell that to myself and believe it, some people need to hear it from me. 

With these stories, I would remind you: you are enough. You have agency. You have outlets - writing, talking from stage, talking to people. And when things are not easy or even possible, that is not on you. 

You are enough.