Sunday, December 21, 2014

Chess and Testing, Strategy and Tactics

I spent an evening reading Rikard Edgren et al.'s book (in Swedish) about test strategy, and I keep thinking that we're confusing and mixing strategy and tactics. From a small note in the book referring to the strategy of chess, I wandered into Wikipedia to read a little more about chess and picked up a few realisations.
Tactics are move by move, whereas strategy is about setting goals and long term plans for future play. 
A lot of the Edgren et al. book is about tactics - a huge collection of potential tactics. Knowing tactics is very relevant to potential success in the game of testing. If your selection of tactics is very limited, your testing is limited. But tactics are not strategy.
Strategic goals are achieved by the means of tactics while tactical opportunities are based on the previous strategy of play. 
The Wikipedia article about chess strategy did not describe strategies very successfully. And it is the same with examples of test strategies: looking at them, they appear as lists of selected tactics, founded in an analysis of the product, and most often not documenting the why of the selection. Strategies and tactics are intertwined.

I particularly liked this quote from the Wikipedia article:
"Strategy requires thought; tactics require observation" -- Max Euwe
Observing what happens to your testing after a move you make is not an easy skill. But building further into strategy is even more difficult. For each tactic, it's not just the what and how of the individual idea that guides testing - it is also the skills and knowledge the person performing the testing needs to complete it successfully, with the right results. Observation is needed in testing both for the information we're supposed to deliver and for the choice of the next tactic to apply.

And the Wikipedia article on chess strategy offered yet another piece of advice.
In chess, one needs to become a 'master' before strategy knowledge becomes a bigger determining factor in game outcomes than tactics. Many chess coaches emphasise the study of tactics as the most efficient way to improve one's results. 
This leaves me thinking that it may well be this way in testing too. Perhaps we should focus on outlining and teaching tactics instead of strategy. Perhaps we already are, at least with the Edgren et al. book.

Building skills for strategy is building awareness of tactics, so that you end up with choices you need to think about in the longer term. After every tactical move, you're better equipped to draw strategic conclusions about the overall play as well - if your ability to observe is in place. Strategy lives, and in testing there are perhaps very few strategic choices so set in stone that changing direction with later learning would be impossible.

Chess has a defined end; testing does not. Thus in testing, you need to actively think about the choices - what comes first, what gets left out? And if you find something extremely relevant very late in the project, it can still get included.

If test strategy is "the ideas that guide test design", isn't test tactic "an idea that guides test design"?




Saturday, December 20, 2014

Temporal aspects to a test strategy as the next idea to guide testing

As I started working on identifying an experience report I would deliver for #DEWT5 in January on Test Strategy, I hit a wall of confusion that I have not quite overcome yet. What is test strategy anyway - for me, in practice? And which experience related to it would I want to share?

Test Strategy is the ideas that guide test design.

At first I was looking at the two most recent products I work with, and I made a few observations:

  • On one product, I owned strategy and another tester owned testing. On the other, I owned both strategy and testing. I'm sloppier about communicating my ideas on the latter. 
  • On one product, there is never enough testing; it's completely deadline-driven, doing the best work with what is available. On the other product, the schedule is flexible enough to include the right testing before releasing.
  • For both products, there's nothing about releasing that says testing stops there. With a continuous flow of releases, we react to feedback outside the time allotted for the feature.
  • There is no test strategy document. There is nothing I could clearly call test strategy. Even the ideas on how to test are more generic guidelines than a project-specific strategy.

Looking at the two products, I came to realise that the way we work on both of them is a continuous flow of changes into an overall vision of a product, and having a strategy beyond the generic ideas of "provide relevant information" and "do the most important thing next" seems off. I would not call the checklists we create to model the product a strategy - though they help in thinking about the scope of testing. I would not call the Jira tasks that outline testing timeboxes a strategy either, but they were a way of discussing strategic choices of where to use time, and what to do in a shallow versus a deep manner. As skills grew, we gave up even those tasks and just work on the development tasks without plans of any kind - a day at a time. 

In relation to the changes going into the build to be released, I can well outline the work we have done and what we should do. I notice that I primarily choose what we'll test, and how, based on the set of skills available - I have developers test what they will manage (with a stretch), I have product managers test what they will have the patience for, and I test the stuff that is hard to give to people less skilled in the types of testing I do. 

It seems to me that my test strategy is just the next idea for an experiment to do, to provide information by testing. I try to choose the one that I believe will at least be of value, on the principle that time will run out anyway. 

Looking back at a test plan I wrote at my previous place of work, though, I clearly saw strategic thinking - identifying high-level areas and the ideas to guide those areas - as very important. But there, someone else owned the strategy and the testing, and I would just suffer the results of poor strategic thinking that drove focus onto too narrow a set of things. 

So this left me wondering: if test strategy is the next idea to guide testing, built an idea at a time, then the goodness of the next ideas relies on whoever is thinking them up. Introducing more versatile ideas as strategy, without implementing the ideas as testing, could be a good approach then - in particular for transforming people who have one idea and then run out of ideas about what aspects to test for. But what am I missing if I just don't build anything more strategy-like, as I do now in my projects? 

Could test strategy be the ideas that have guided test design, built one idea at a time - the next? Playing with the temporal dimension of strategy seems at least an intriguing idea. 

Friday, December 19, 2014

Brilliant Tester Looking for Her Next Challenge to Take

Today was the last day for Alexandra Casapu (@coveredincloth) from Altom to contribute at Granlund (that's where I work as test manager / test specialist). I've known for a few months that she would be leaving us, but no amount of preparation is really enough when you have to let someone as great as she is go off to explore new challenges.

Alexandra started over two years ago with the Granlund Manager product, and I clearly remember thinking many times about Altom calling her a junior tester. If she, with her skills and drive for learning, is a junior, I wonder what a senior looks like. Junior or not, I've tremendously enjoyed watching Alexandra grow as a tester, reach out for new things, and become important without making herself indispensable.

There are a few things I would particularly like to emphasise.

Over the last months, Alexandra has worked hard to transfer knowledge without creating test cases. Her contribution throughout the two years has been fully exploratory. I appreciated her mentioning today that she felt encouraged towards autonomy but also supported by me, and she really flourished with the autonomy smart people should always have. Her notes and explanations of what she has learned - meant to speed up the new people's learning without losing all the knowledge she has built - have been very impressive. We at Granlund failed to assign the developer to be retrained as a tester in time, so she has had to focus on structures. Luckily, while she stops as a tester today, she will coach the developer in training for half of January's working days.

The issues she finds are to the point, clearly reported, and well researched. And there are many of them. In the last weeks, I've needed to address the risks that not replacing her with another exploratory tester will leave us with: 100 bugs every month that we used to find and fix, and are now unlikely to find until the developer has been retrained - and there's a long way to go with that. The product managers have learned to rely on her thoroughness and consideration in testing the features, and will have an unbearable workload without her (or the likes of her). But we chose to first try retraining the developer for a new career before going back to Altom for another great exploratory tester - once production has issues at scales we've so far avoided with her, and developers are firefighting instead of focusing on the new features they've promised.

She has worked in particularly challenging settings, still providing excellent results. The team she has worked with speaks Finnish, and writes requirements, emails, and Jira descriptions in Finnish - a language Alexandra does not speak. Yet she not only understands (because she works hard on overcoming all barriers) but asks insightful questions that those who can read the documentation in their native language don't get to ask. She has embedded herself in a team of developers who don't volunteer information through weekly meeting practices and Skype/Flowdock discussions, with a local agent in me voicing some of the silent information. This team's communication isn't excellent even locally, and yet she finds ways to patiently gather the information she needs.

The team's developers have said that her joining the testing of certain areas is time wasted on long periods of learning, and she has shown them how true exploratory testers do things: learn a little, provide valuable information soon, and deepen your learning to become a product / feature expert. She has surprised everyone with that.

And she has also contributed significantly to our Selenium test efforts. First with ideas. Then with tests that were not quite right for maintenance - but she learned. And eventually, with tests that stand on par with any of the developers' contributions. She is persistent, takes on any learning challenge, and drives it through with admirable focus.

We would not want her to leave, but we also recognise and admire her reason for moving on: to learn about different things that will make her an even better tester. As far as I've understood, she is now looking for a project where she could work face-to-face with the team instead of remotely. So, should you be in Romania, or should you want to hire a brilliant tester from Altom to work locally on your project outside Romania, I would strongly advise looking into giving her the challenge she needs. Ru Cindrea, the account manager we've been in touch with, would be happy to talk about those opportunities from an Altom business point of view.

Funnily enough, the title of this blog post could apply to me as well. I'm looking into moving to the US, specifically California. Meanwhile, I'll just work partly for Finland from California, as I will be leaving for there in just a little over a week. 

Thursday, December 18, 2014

Focus on some, be blind to some - need faster learning

I'm trying to think about why true incremental development and co-designing features seem to be so hard. The reason I think of this is that just when I thought we had managed to do something better, the empirical evidence suggests that we failed. Now the question is whether we will learn from this.

Earlier, we did several features so that someone (our project manager) created a specification in collaboration with business representatives, basically turning their needs into a paper prototype of user interface sketches. The developers would look at that, ask whatever they needed, and draw their own conclusions on what to implement. I would look at it as a tester, see different things, and, when seeing the features in action, notice things that wouldn't work - either because the user need and the design were out of sync, or because the design was insufficient in relation to the product context, or because there were still minor issues the developers had not noticed in the implementation. It was awful: at worst, we would do three rounds of redesign, as the feedback I provided would trigger very good discussions with the business representatives, and we would learn that what we had specified was nowhere in the neighborhood of what we actually needed.

To make things less awful, we changed so that we all sat together to do the designs, discussing the needs and guiding ideas for the latest feature. As we discussed, the designs changed - almost completely. That is positive: with the collaboration, we get much closer to what we need. But as discussions tend to go, the vocal people get too much attention. If we noted problems we'd previously had with understanding the availability of functionalities in different states of the program, that would hog our attention. If we talked about the labels and button locations in the user interface (as the business people wanted), that would hog our attention. So with all the discussion, in retrospect, we lost focus.

There are major concepts within our program that guide all functionalities. They are like a frame that never goes away. We failed to talk about those. In retrospect, it is obvious to me: it is one of the things where we always fail - seeing features in relation to other features, especially system features. And now that I'm testing the feature, it's obvious that we failed to deliver working software from a very central angle. There's a whole bunch of small fixes we don't need to do now, but adjusting things at the level of basic concepts might be quite a lot harder.

There's really one big mistake we keep running into over and over again. Not having all the information at once is not the mistake. Not being able to pass on information we might in retrospect think we had is not the mistake. Our mistake is that we build things in too-big chunks and accept delayed feedback.

With two days before Christmas vacation and less than a week of work effort before a major demo, it's not exactly fun to tell people that we added something that appears to work to the extent we need to demo it, but the old stuff we had is quite badly broken. And the new stuff only works in simple settings; placed in realistic production scenarios, it fails in very relevant ways.

We have a nice demo user interface with nice demo functionality. But is that what the system is about? Hardly. We need to learn new ways of working together and learning together. Perhaps 2015, with mob programming, could take us closer. A New Year's wish, perhaps?

Wednesday, December 10, 2014

How attitudes towards testing have changed in 20 years

Very soon - in about six months - I will have 20 years in testing and software behind me. And it all feels like yesterday. I've learned a lot, and have so much more to learn. I love the work I do.

With this idea in my head, I was checking through Twitter and all the retweets of my previous post - which I had summed up in my tweet as "developers not studying skilled testing and telling that testers are not needed" - when a realisation hit me. The attitudes I'm now facing, with agile wanting to make my profession extinct, are not at all different from what I was struggling with 20 years back. And yet it's all different.

Back in 1995, very few people would even recognise testing as a profession. There were no professional networks in Finland, and most testing was still done by developers. Where testers existed, they were deemed less valuable, less appreciated. In particular, developers would insist on not needing the testing testers were doing - that end users' feedback was "realistic" whereas testers' was not. And I remember many, many developers telling how testers would not be needed if only developers did a proper job of developing - and the ones who kept saying this usually thought they were the ones who did.

The attitudes on this level were very similar, but there are two differences that I find notable.

Back then there was less of a culture and community of testers. That community has proved invaluable in building skills - skills that have stopped most of the developers I work with from talking shit about my contributions. Immersed in the culture of testing that testers co-create, a lot of tacit learning is passed on, and with practice, that learning builds my skill in delivering just the right information. It is also a great support network: if I ever feel upset or down, there's always someone listening, helping, offering constructive advice on how to cope and turn things better. Constructive and testers? Yes. Testers help other testers, just as they are there to help developers and business people. Testers have a community of support.

The other difference is that developers have found a testing of their own that they did not recognise 20 years ago. It is not the same testing testers talk about, but they tend to use the same term, as they still do not study skilled testing - just as they did not back then. There's a whole smart culture of unit test automation, which James Bach and Michael Bolton justifiably choose to call checking. When there are a lot of details to monitor and keep together, with short cycles and a fast pace, keeping intact the things we know of has built a developer testing culture that makes the idea of developers developing to professional quality much more likely.

After 20 years, developers have found checking as a means to enable quick changes, and rely a little less on end users to report the simplest errors over and over again. Testers are stronger, together. But we still have not managed to get to a point where we appreciate the non-programming people's contributions to creating software enough to look in more detail at what happens there. 

Monday, December 8, 2014

Would you please stop putting me in a box where I don't belong?

I'm a tester. I love being a tester. But I hate people telling me what I do just because I'm a tester.

I know I do heavy job crafting (an article on that is pending for Rosie Sherry's online publication), meaning that I interpret my role in ways I find suitable for me and my organisation, so that some of the things I end up doing bear no resemblance to my job description. But the core of it stays: I evaluate a product by learning about it through exploration and experimentation (Bach & Bolton's recent definition of testing).

There's a whole bunch of critical thinking and hands-on empirical evidence-gathering skills I apply. There's a wide pool of business-oriented knowledge I've acquired over the years that helps me see things many developers around me are blind to. And several product owners over the years have personally thanked me for valuable contributions to the success of their products, made with the mix of knowledge and skills I provide.

As a tester, I've earned one employer of mine half a million euros by finding ways to release sooner rather than later - exploring risks in layers rather than following a plan of what we would need to test.

As a tester, I've enabled the creation of a server product that was supposed to take man-years in less than a man-month, by putting the ideas together in a completely different way that was good enough for that time and purpose.

As a tester, I've saved numerous hours for my current product manager, who personally thanked me for helping him see that the software could actually work and fulfill more of his business needs than the team of developers had told him. With the hours he saved, he worked on other things that drove our business forward in relevant ways - opportunity cost matters.

As a tester, I've suffered through reimplementing the same feature three times before it hit the mark, and I've helped my team hit the mark with one main iteration of a feature.

I feel very upset when seeing tweets like this:

I wonder where these unicorn developers and product owners are who don't need or appreciate help from someone who identifies herself as a tester, since everyone I've worked with does appreciate it. There are programmers who do their jobs so that testers find nothing *they deem relevant*, since it's all just new requirements - stuff that the product owners *should* identify (but don't, without a helping hand).

I've spent a lot of time learning to be a great tester. I'm really, really good at what I do. However, the work I do bears much more resemblance to William Gill's list of the things a Complete Product Owner would need to be aware of than to the simplistic ideas where all testing is placed in the development sandbox and automated. I'm more likely to be a product owner than a developer, but as a tester, I'm a very useful mix of those two worlds.

Like I just wrote for the talk I suggested for Scan Agile:
Testers don't break your code, they break your *illusions* about the code (James Bach). Illusions may be about the code doing what it's supposed to; about the product doing what it would need to; about your process being able to deliver with change in mind; and about the business growing with uninformed risks in the product and the business model around it.

I should add the illusions of the perfect developer and the complete product owner who don't need help to that list. I know great developers and great product owners who appreciate having an in-depth empirical analysis done by a tester so that together we create more value faster. They are hardly perfect - I'm not perfect. But together we're pretty damn good!

I also need to comment on this:
I test different things in the same project differently - context matters. I don't create "abuser stories", except on rare occasions. There's plenty of work for testing without focusing on just the negative. I help create value more efficiently. So, please, developers, stop putting me and the likes of me in a box just because you haven't met or observed in detail real skilled testers in your projects. There's so much the agile community could learn from the context-driven testing community. Trying to build bridges is tiring when you constantly hear from people that, regardless of your continuous contributions, there's a theory (based on bad research, I might add) telling you that you will not be needed.

Skilled testers breed from a culture of testing. Agile is doing a pretty good job of trying to kill that culture out of ignorance.


Saturday, December 6, 2014

Skills and habits

Within the context-driven testing community, we talk a lot about skilled testing. Skilled testing is a great replacement for "manual testing" - a phrase that should really be banned, as testing is done with brains and bears very little resemblance to manual work.

We talk about a great number of skills. The Exploratory Testing Dynamics cheat sheet by James Bach sums up some of them nicely. Critical thinking is core to it all. We need to be able to model and think of the system and its context of use in many dimensions, observe, work with test ideas and design experiments, and report and describe the work - all to evaluate the product by learning about it through exploration and experimentation.

I love the message James and Michael deliver on the list of skills they've identified: each of them is teachable. Each of them can be learned, and learned more deeply.

Skilled testing - and the skills of testing - have been my focus for a long time. We have a Finnish non-profit, founded this year, on this very theme: testing worth appreciating, as it requires skills that don't exist everywhere. Skill allows us to do deep testing (as opposed to shallow testing) and to surface threats to value and awareness of risks.

A week ago, a consultant friend spent an afternoon with my team at work. Something he said after that session stuck with me, relevant to all the thinking about skills we do. He mentioned that during our session of group programming, there were examples of skills that we have but habits that we are missing, and that we need to work with those skills more to build the habits. The particular skill Llewellyn used as an example was checking all the details of what we had just changed before checking code into version control - something every one of us can clearly do, but something we clearly were not doing enough of to make it a habit that no longer scares us or requires focus.
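
As a minimal sketch of that habit in git terms (git is my assumption here - the post doesn't name the version control tool we used), the review steps before a check-in could look like:

    git status            # what files did I actually touch?
    git diff              # read every unstaged change, line by line
    git add -p            # stage hunks one at a time, re-reading each
    git diff --staged     # a final look at exactly what will be committed
    git commit

The tool hardly matters; the point is that each step is cheap on its own, and only becomes reliable once doing it no longer requires deliberate focus.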

My colleague in test, Alexandra Casapu, did a talk about testing skills, and she pointed out that skills atrophy. When unexercised, things you used to be able to do go away. This is very much related to the choices about which habits we choose to have.

I find this a good thing to remember. It's not just skills we're acquiring; we're also turning those skills into habits so that we can use them effectively. Without regular practice, the habits won't get built. Some skills deserve to be set aside and allowed to atrophy. Some habits we've built should perhaps be allowed to wither away sooner rather than later - unlearning is also needed.

Never a dull day when learning more. The choices of where to focus one's time just seem hard - all the time.

Monday, December 1, 2014

Getting great speakers for a conference

Over the years, I've done my fair share of organising conferences. Some of these conferences we call seminars in Finland, but that is just a name for a one-track conference. I've been the chair, participated in quite a few program committees and content advisory boards, and organised smaller sessions to learn from all the great people I want to learn from. I'm not done, far from it. But at this point, I feel I have something to say from experience about how to get great speakers for a conference.

Let me first summarise my lessons:

  1. Many of the best talks (to me) are case studies / experience reports that consultants cannot deliver, and practitioners have less incentive to propose their talks to conferences. Consultants and wannabe consultants are more likely to reply to CFPs. There are fewer women among traveling consultants, so if you want more women, you need to get outside the consultant sector too. 
  2. Invited talks can target anyone, not just the same old faces that did well last time. If you walk around and talk to people, you will find great stories that deserve to be told. Some people think great conferences are about great names, though, and great names usually need to be invited, as they don't need to reply to CFPs to fill their calendars. 
  3. People in general appear to like being invited (recognised), being given feedback to build their talk in collaboration, and not wasting their effort on suggesting talks that end up discarded. 
  4. A mix of a CFP - as in Call for PROPOSALS (something that starts a discussion, not yet a fully formed presentation) - and invitations would be my recipe of choice for building a great conference. 
  5. Paying for speakers' travel is a good practice. Paying for speaking would be a good practice too. Organising work on the side in the surrounding community is a good practice as well. The worst practice is not telling whether you pay for any of it at the point of sending out a CFP. 

CFPs, inviting people to participate in a CFP, and the uncertainty

A lot of conferences send out a call for proposals / papers / presentations - a CFP. A CFP is an open invitation for anyone to suggest what they would talk about. In organising a conference, we often seek to share the CFP with various groups, even encouraging individuals to respond to the CFP without any commitment to accepting their talk. Sometimes we just post it in the traditional channels and see what comes out.

Most replies to CFPs come from people who are selling something - usually their tool, or their time as a consultant / trainer. And then there are some people who just really want to share the good stuff they've done that others could learn from.

Many CFPs are calls for presentations, meaning that your talk is supposed to be ready when you submit it. There's a significant amount of work in getting a talk to that point, and many (good) people get rejected for not having done enough prior to submitting. Some CFPs are calls for proposals - ideas of what a particular person could talk about - and with those, the process of getting accepted tends to include a lot more discussion and collaboration. You would usually be asked for a phone number and expect people to call you to talk about your idea(s). The distance from saying you might want to talk to having a print-ready description can be significant. This form of asking for proposals is more a matter of asking who has which stories, hoping people will volunteer the information about themselves or their friends. It's also a lot more work for whoever is organising the conference talks.

There are very few CFPs that I've responded to, while there are a lot of conferences, both in Finland and abroad, that I've done a talk at. The longer I'm around, the less likely I seem to be to respond to a CFP. It's not that I would not want to talk at the conference. I just don't want to prepare a unique talk with a lot of effort in it (and I need to do this before submitting) without the possibility of discussing expectations with the organisers - either lowering my effort because my topic isn't interesting in relation to other suggestions, or increasing the likelihood of the effort being put to use: delivering the talk.

Recently I've responded to a few CFPs, as I'm turning into a consultant again. One because of a bet with a colleague - which I lost, happily. Two others because I wanted to get in touch with people in a particular geographic area, with future work opportunities in mind. One because a friend wanted to co-present. And there's one CFP that I responded to without realising it was a CFP and not an invitation - a conference I will not contribute to again. Being clear about the uncertainty of a speaking slot is a good practice.

You can also invite people to participate in a CFP, and that alone works a lot better than just sending out a CFP and hoping people catch it. You might have to ask many times, and at least I feel a lot of personal pressure about it: no matter how much I emphasise that I can't guarantee the selection, as a different group is making it, I feel bad when people I've personally appealed to submit do not end up accepted.

There's a limited number of speaking slots anyway. We just look to fill them with the best possible contents. "Best possible" may have many criteria defining it. Good value for listeners may not require a public open call for presentations at all. Most commercial conferences don't have one - they just rely on groups of people giving advice on whom to invite.

Inviting people is caring, do your homework

I've been invited to the advisory board of a Finnish conference every year since 2001 - that is quite a few conferences. That particular conference is commercial, very popular, and has had great contents, built in collaboration: the commercial organiser asks candidate participants what they would like to hear about, asks professionals like myself what people should hear about, and puts the two together in a balanced mix. I take special pride in going to these meetings with a list of people who have never spoken before, with a variety of topics, and with knowledge of who could deliver what in an up-to-date manner.

To be able to do that, I sit in bars after Agile Finland / Software Testing Finland meetups and talk to people about the stuff they are into. I make notes of who the people are and what I would want to hear more of. I'm always scouting. I use scouting for great presentations as an icebreaker topic, asking what would be the thing you should talk about, helping people discover what it would be. At first I did that to hear the stories told myself; nowadays I also do it just for the fun of it. It's a form of call for presentations, with a personal touch. And it works brilliantly.

I feel some of the comments on Twitter about getting speakers assume that inviting means inviting people who have talked before. I invite people I've never seen do public talks, based on how they speak in a bar. If the content is good, I can help them fine-tune it and deliver it better. I've helped many people, and would volunteer for that again and again. That's how we get the best stories.

The people I scout for are usually people without the consultant's incentive to talk. They like being recognised for their great stories and experiences - and they deserve to be recognised. And when invited, they work hard on doing well with their presentations.

Compensation issues

The last bit I wanted to share is that a lot of conferences still fail to make clear their principle on compensation. I'm sure you can get great local speakers, even some consultants, without paying for their travel and accommodation. Local speakers might be just what you need for your conference to be great - local pride in accomplishments. But if you are seeking people who might travel to come to your conference, it would be good practice to pay for the travel and stay, and to state that in advance, without a separate question being needed.

I also believe we should start paying speakers for their time in delivering presentations. Some communities pay for time directly as a speaking fee; some pay by organising a commercial training on the side of the conference. The latter is very hard for many people to fit into the same timeframe. Some conferences are built to have paid workshops on the side, and allowing a workshop on the side significantly sweetens the pot for the presenter.

Time at a conference is time away from other paid work. There needs to be something in it. Marketing yourself could be it. Traveling to new places at someone else's expense could be it. Meeting people in new communities could be it. Or it could just be another form of work you do, if you choose to set up conferences like the commercial organisations you compete with anyway - sometimes unfairly, lowering prices by avoiding speaker payments.

For example, why would I want to pay to speak at, say, EuroSTAR? I have little interest in doing that. A usual track session there does not cover my travel, and it most definitely does not cover the income missed from time away from work. Being big and important means there are many submissions in sheer numbers, but the quality might not be good, with way too many consultants / tool salespeople with a sales agenda. There are real gems in the masses - really great consultant / tools-people talks too. But the ones that do not get listed could be - I claim would be - even better. I base my opinions on being on the program committee in 2013.

Summary

Not all conferences are the same. It helps if you think through the slots of the conference you're organising and make your expectations visible before the CFP or the invites. There are a lot of people who will volunteer to speak, either by responding to a CFP or by responding to an invitation. You'll never fit them all; you need to choose somehow. Choose by knowing the people, personally talking to them about the depth of their experience. Choose the ones that excite you. Listen to a video of them talking. Ask around about experiences of them speaking. Take risks on some of the slots if it fits your conference profile.

If you want gender diversity, budget the speaking slots for gender diversity and be prepared to create a balance of CFP responses and direct invites. If you seek cognitive variety, you again need a mix of CFP responses and well-researched invites. Only people who feel they belong to your conference's community will respond to a CFP; if you want cognitive variety, you have to reach outside the usual-suspects circle, and only invites will do that.

There's good in CFPs. They are a great way of finding people with topics they want to present so much that they are willing to do the work without knowing whether they will get the value of delivery. The value for them may be learning how to frame a talk so that it gets accepted. Or they may be fishing with the same talk at various conferences. Maybe they want to be at your conference and free entry is what they're after. Personally, I would not want to do the same talk at two major conferences - but that is probably just me. At least you know that the people who submit when asked actually want to be there.