Tuesday, January 19, 2016

Crowdsourcing and what to think of it

About three years ago, when I had just joined my current projects, we needed more testing than I alone could do. The developers test, and they had tested before, but given their level of skill in (and interest toward) testing, the main outcome of their testing back then was increased coffee consumption in avoidance of the work.

Back then, I identified two options we had:
  1. We could use crowdsourcing: pay a fee for a service that lets random testers use the product to find problems.
  2. We could use contracting: pay a fee to have one dedicated tester grow with us by testing for us.
Comparing the prices back then of uTest and the Romanian contractor we considered, the differences in the monthly fee were not significant. So the question really came down to contents. We decided at that point that we would be better off with contracting, and from that decision, coaching to bring another deep tester into the world commenced.

I've come back to thinking about this because last week I decided to try testing with Testlio. As far as I can tell, they are an alternative to the uTest of back then (Applause nowadays) in crowdsourcing, with the specific twist of paying not for results but for assigned hours.

I'm just about to start my third round of "let's see what this is about" with Testlio. At this point I don't know what the hourly rate is. From their pages I can tell that even the highest category pays less than my day job, but I guess the higher rates come with seniority. From Testlio's perspective, though, I'm a newbie, so I doubt they will place me in the higher rates without specific Testlio experience, experience I'm unlikely to build up.

From a professional tester's viewpoint, here's what happens with Testlio:
  • They invite me to projects and assign me work packages 
  • Timeframes for work package availability are short, and I can decide before starting whether or not to volunteer two hours of my time today.
  • When I start a work package, I start a timer; I'm paid for the time. Forgetting to start the timer (a newbie mistake) can be corrected by discussing it with the test leads.
  • I get a high level checklist (find this feature and mark pass/fail) or a charter (spend an hour doing X).
  • I test what I can in the time given, and report. Reporting includes adding notes to a very high-level plan and filing bug reports.
Since I volunteer for this as a tester, I try to do my best in the time allocated. Even as a user, I would likely do a decent job if nothing special were expected of me. Buggy software (like work package 2) makes a skilled tester like me even less necessary. For a tester like me, it ended up being hard to stop, so free work gets done. I even dream of things I could still have checked.

From a tester's perspective, Testlio's hourly payment approach is better than the other options I looked at back in the day. If you get paid only for being first to find valid bugs, it adds another level of shallowness and leaves out things that might still get reported under the hourly model.

I like looking at how I feel while doing this.
  • Refreshed. Testing something different is refreshing. 
  • Unchallenged. Short timeframes invite shallowness. Even a few consecutive work packages on the same test target with different versions still leave a feeling of shallowness. Within 2-3 hours, you won't investigate anything complicated.
  • Unresponsible. My job ends with the clock. I do the best I can within that timeframe. I have no connection with the product. I have no social relations with others, at least not paid social relations.
I currently have three main concerns:
  1. The pay & the progress. I expect that as soon as I learn how little I was paid for these, I'd rather contribute my time to open source projects.
  2. The bureaucracy. I already got a request to follow the issue template. I hated the issue template and felt constrained. It had repetitive information to be filled in. It had information unnecessary for most of the bugs I logged. Seriously, copy-pasting my service provider info for simple UI bugs that have NOTHING to do with the network makes me feel like a rubber stamp. I would choose to use my time better, even though I understand that requiring the same info every time comes from lessons learned when that info was missing on some bugs.
  3. The shallowness in the model. It's great that some companies use this to find bugs. But I find the idea of using this as a replacement for in-house testers, thinking you're getting the same thing, inherently wrong. The timeframes here are too short for proper testing; testing done in two hours remains shallow. I feel the whole crowdsourcing model drives intelligent, deep testing down by implying that a short time is enough. It's not just about having skill in testing: learning to dig deeper takes time.
The first two concerns will soon drive me to prioritize other things to do with my time. The third concern is bigger: I don't need to be part of this change, but the change has been here for a while already. What should my stance on it be?


1 comment:

  1. Hi Maaret, great post and I felt I had something to say!

    I've recently been looking into a similar company that provided the same service and I had similar concerns:

    1. My first thought is: what value are these services providing that I couldn't get by asking friends or family to perform the tests (even if I had to offer a small amount of money)? It seems like the only value they offer is supposedly less overhead in organising the testing effort, but then:
    2. Given the very constrained setup, both in terms of limiting the time testers get and how they can convey their information, surely the overhead of analysing the results is much greater than normal? If I organised my own tests, I would get to observe the users and adapt how I receive their information.
    3. Reading some of the reviews from companies that have used these services to crowd-source testing, there seems to be a feeling that some of these crowd-sourced testers are potentially professional testers. This means you may not even get objective, user-like feedback.

    I'm struggling to see the cost-benefit of crowd-sourced testing, even just for general usability testing, over simply organising your own.

    I'm very curious to read/hear about some success stories with crowd-sourcing from the testing community!
