Wednesday, September 5, 2012

Leaving bugs is better for contractor business?

There's an ongoing discussion in Finland, in Finnish, about customer-contractor relations and how they relate to testing. Michael Bolton was curious about a blog post I wrote in Finnish, but instead of just translating it, I'll try to tell you the story that comprises that blog post, an interview in a Finnish IT magazine, and another Finnish blog post that started the discussion.

Within the testing field in Finland, the customer-contractor relationship has been a hard topic for quite some time. It has a lot to do with the testing community, since we're a community that shares experiences between companies, and we have clearly two groups of overall needs in education: the needs of those who create software products, and the needs of those who buy all development from outside or sell development work to such buyers. It has seemed, for quite some time, that the customer-contractor testers describe their challenges as if unhappy and powerless, while the product testers sound quite empowered.

The discussion started with an anonymized post in a Finnish blog, where a tester told the story of a project in the public sector. The main points were:
  • The customer organization decided to use a testing company to help with acceptance testing of an SAP-based system.
  • The testing company found a lot of bugs, and the project steering group (both contractor and customer) kicked the testing company out after six months.
  • While the testing company was present in the project, the other contractors (the ones developing) made an effort to argue that the bugs were not bugs as per what had been agreed with the customer, claiming the testers could not test, tested against the process, and tested the wrong things. There was shouting on the software contractors' part, and refusal to fix issues at the fixed price.
  • The assumption in the project was that the development contractors had gotten used to billing bug fixes separately, making their profit there, in separately billed continued development.
As a continuation of the story, I got a phone call from an IT magazine, being one of the local testing experts, asking about this story and for my comments on it. The article was titled "Buggy system is a money-making machine for the contractor". Its main points were:
  • The testing people interviewed find that stories like this are typical. Getting bugs fixed in a fixed-price project takes a lot of effort.
  • It's better business for the contractors to get a lock-in on the customer with the project and fix bugs in upcoming maintenance projects, billed separately.
  • Not having enough time to test at the end means the problems are found late, at a time when they are billed separately. The typical guarantee period is six months from the end of the project, not from the production date.
  • The customer may need to test for days to find a critical issue that takes half an hour to fix. Leaving testing to somebody else's wallet is tempting: fixing bugs only is cheaper than testing the bugs out and fixing them.
  • There's no single guilty party; these problems come from conflicting business models on the customer and contractor sides, which makes communication and agreement hard.
  • The suggested solutions were twofold: incremental delivery plus testing, and a relational contract that would place the money for bonuses, changes and defects in the same pile, helping mitigate the adversarial relationship between the customer and the contractor.
Against this background, I wrote my blog post.

Leaving bugs is better for contractor business?

Product development is different

My current work is product development. That means I test for an organization that has the customer and contractor parts on the same payroll. One of the systems I test has external end users; the other even has its end users on the same payroll. For the system with external users, it's clear from logs and reports that, given the freedom, users will not follow any of the paths we assume; given a crowd, they create all kinds of usage scenarios we may not have originally intended. For the system with internal users, feedback is even more direct, as we see our colleagues face-to-face in the hallways.

This particular work is new for me: I started in April as the first and only tester in our software development. The system with external users has been in production for some years now, and I feel I would insult my team members by saying the system has not been tested. However, the results I'm providing indicate there is testing that has not been included, and points of view that have been left out, for lack of extra investment (time and skill) in testing specifically.

My day-to-day project work is finding problems, bugs, as we say. A bug is anything a user finds irritating, blocking or slowing. Bugs can be missing essential functionality we never specified. Bugs can be coding mistakes. They can be mistakes in the way we put our code and somebody else's code together to solve a problem that is ours to solve. They can be errors in how the user thinks she should use the software, in what order or at what pace. The developers and project managers in my team are delightfully unanimous with me that it would be a complete waste of time to stop and argue whether a bug is in the requirements, the specifications or the code; we need to address all of them, within our limited resources, towards the benefit of the users and our business. We don't stop to define whether a bug is a change request or a defect, or whether it was created in this increment or outside some guarantee period. In my team, I'm a valuable member helping the team understand the types of problems we have, and finding problems earlier with respect to the segment of customers who find a particular issue a showstopper for anything they are willing to do with us.

Product development is not easy, since there's usually too little time invested relative to the hopes and wishes that could come true. We're trying to understand the masses, yet we have contact with individuals. But all the challenges I face here are fun, as things go somewhere. So much for the introduction.

Customer - Contractor projects

Before joining this company, I spent a four-year period actively trying to understand customer-contractor projects and testing within them. I worked first as an individual subcontractor for a contracting organization, until I moved to a day-to-day job within the same segment, pension insurance, in a customer organization. In the customer organization I was assigned to support acceptance testing and to define the testing we expected of the contracting organizations. These projects have some weird characteristics of context that make life harder for testing:
  • A lot of time and energy is wasted when customer and contractor compete in proving whether the problems that must be fixed are defects or change requests. Defects, as per the contracts, are something the contractor fixes within the amount already due for the work done, while change requests are paid for separately by the customer. I find it amazing that you can contract for a pension calculation system that doesn't calculate pensions, yet works "as specified". In this model, as the customer, you pay separately to get the system you intended, and the contractor is happy to leave all specification work and responsibility with the customer. Then again, specifying includes a risk, and some may argue that this risk has not been paid for; but the sum due would, and should, be higher if it were.
  • The customer's testing phase, "acceptance testing", is often the first effective testing that provides the results needed to know whether the system will work for its intended purpose. Due to other delays, this phase often happens in an even tighter timeframe than planned, and the planned timeframe was for acceptance, not for test-fix-test cycles. To actually test for acceptance in acceptance testing, the full scope of testing should have been covered before it: full scope meaning both the same contents and the same quality through the right skills. If the previous testing phases are paced by my first point, specification worship, we find at the end, just before the production date, that the system doesn't do what it needs to do, but does what it has been specified to do. Many things in software development become tangible only through use, and through testing.
  • Defects may be included in the base price set in the contract, but contracts rarely take into account that testing done in a way where problems are clearly reported, so that fixes are enabled, may not be. Asking for the testing you need requires skill. It's a myth, in my experience, that not tested means not working. I've experienced systems with no separate testers whose quality was better than many of those tested by "professional testers"; usually these cases have a background of software craftsmanship and entrepreneurial spirit. Testing, by whoever does it with time and skill, teaches about surprising limitations, which will never all be covered in development. So it is not enough to ask the contractor to test. You have to be able to explain and agree on what and how, and in what scope and quality. Low-quality testing is testing too; some people call it testing to press the same buttons in the same order over and over again, without any rationale that could explain why this is a good use of the limited budget. With the same use of time you could at least vary things, just a little, and make a difference to the potential of the results. Also, many customers knowingly and deliberately buy their projects without testing, cutting the price by 20-30 % and agreeing that the customer will deal with testing; then all of a sudden acceptance testing starts with a non-working system that nobody has looked at in its integrated form.
  • Competition for contracts between contractors brings interesting side phenomena. I find it really hard to make the offers comparable in scope, and contractors ride their own models to bid for projects clearly underpriced, where the actual cash cow is the fixes and changes after the first delivery, billed separately. The first delivery is actively minimized during the project, knowing that changing contractors is not an easy task and continuation with the same one is likely. The contractor may not be able to raise the hourly rate, as many customers fix that already in the first bid, but nothing limits how many hours a task takes within the contractor organization. And hours go up easily when your model requires separate specialists for talking with the customer (requirements), solving the needs (specifications), designing how this can technically be done (architecture), detailing how the change should be made (technical design), implementing it (coding), testing it technically (integration testing), testing it as part of the system (system testing) and talking with the customer about schedules (project management). And if the project is any bigger, we have teams of specialists, who need someone to direct the teams. Sometimes it just feels like we're overspecializing, but this makes sure the number of hours continues to surprise the customers.
  • The atmosphere of fear and distrust costs a lot for the customer, who is eventually paying. In this atmosphere it is seen as by no means good that the contracting organization would monitor its own work, for example by being responsible for the test automation that would support future development (and when you bypass that in the first project, it creates a nice bunch of billable regression testing hours for the future). Any relevant testing activities are expected to be done somewhere else, preferably by some "independent" party, another contractor in it for the billable hours. When I worked in the past for a testing contractor as a test consultant, the sales organization sold several fellows to my more recent organization to create test automation to support acceptance testing. It never did more than give a warm, fuzzy feeling; maintaining it cost a lot as it broke all the time, and it did not live very long. Having worked in this customer organization, I know the biggest reason for the failure was distance. With the same investment given to the organization that was already on the project, there would have been a much better chance of success: they might have actually used the automation, and it could have been part of the criteria of a delivery. I don't get the point of creating and paying for the same testing twice, by two groups, with distrust as the rationale. There are other, cheaper ways to build trust, and lack of trust makes the overall project fail. You can build trust by agreeing on mechanisms in the contracts, but eventually it comes down to people and collaboration. Organizational distance between two organizations tends to require the contractual safety net for the tight spots, where business expectations may not match.
  • An external testing organization often makes things worse. If two organizations have different business models and goals, a third one, focused on testing, may not make things easier. When the tester shows her value through testing results while the development contractor loses money fixing problems without separate pay, and both try to optimize the significant role for themselves, I tend to see chaos and fighting.
After this long rant of problems, I want to mention that there have also been some good experiences. In projects for my past employer, I had one project (I worked on five in 2.5 years, through to production) where we ran a professional 30-day acceptance test. Our testing team did all we could to find defects and change requests. We failed. The system went to production ahead of schedule and under the agreed budget. The key to this success was, I would claim, the excellent collaboration between the different roles in the customer and contractor organizations. In the weekly testing meetings we used 30 minutes to openly discuss risks, revealing specific fears of something being wrong. With this collaboration, the contractor, supported by the customer, made a complex change to an existing system; the contractor's testers tested it as part of the delivery and got things fixed by communicating with the developers on the technical system details. In the background, what made a difference was the setting of the project: it was sold to us at a price that paid for the hours and met the margin goals, allowing the contractor to focus on the essential. A sister project with double the scope was sold at 2/3 of the price. The underpriced project required significantly more steering (fighting) effort, and the work atmosphere never reached the good, productive level the other one had. There was also a significant difference in the testing and negotiation skills of the named representatives on the contractor side.

A buggy system can be a money-making machine and be financially better for the contractor's business, as long as this is the way of the trade for all large contractors. I find that the methods the large contractors use optimize for this. But the customers are to blame too. What if customers would settle for time-and-materials contracts, accept that you can't transfer the risk of your software to another organization without significant added cost, and place more checkpoints, actual incremental deliveries, more often in the calendar? This would still allow the customer and the contractor to agree on a target price, and on reward/risk money, to be used for the flexibility of getting the right system. It sounds to me a lot like a suggestion towards agile methods, especially when put together with the pain of too many people in too much specialization avoiding the actual responsibilities.
