Superpowers & obsessions
I recently saw a tweet from Pradeep Soundararajan claiming to have a superpower, & an impressive one at that: “I have a superpower to find bugs even before the humans involved in the project put it in but you wouldn’t believe me so I don’t demonstrate it”. Very impressive. Sadly I can’t claim to be as powerful as Pradeep, but I do have a couple of tricks up my sleeve which I’ve used to stop defects ever leaving developers’ PCs, something I refer to as proactive testing, & which I’d like to share with you.
I used to be obsessed with hunting down bugs. I still enjoy it just as much now as I did back then, but now I see there are cheaper ways to find them. If you think of a typical defect life cycle, it could be some time before a defect is picked up to be fixed. Sometimes it’s never fixed & just joins that pile we don’t like to talk about. But what if you jump into the process a little earlier, while the developer is still adding the final touches to their work? You could test then, surely? It’ll be fresh in their mind, & I’m sure they’d be much better placed to fix any issues at that point than a day, week or month later when they’ve moved on to other work. So that’s what I looked at: ways to jump in a little earlier.
Proactive testing: Technique 1
I’ll start off with a very simple one which will be common sense to most of you: test conditions created up front, prior to any code being written. Internally I call these “up-front test cases”; some of you may refer to them as acceptance tests, & whatever you want to call them is fine. The main point is that you create a list of test conditions for developers to check before completing a piece of work. This has worked very well for us. You do have to spend some time at the start of a project chasing developers to make sure they’re running them, but the results can be fantastic.
What I love about writing these up front is that it’s much like testing your application. Your mind goes crazy thinking of ideas for new test conditions, & while you’re writing them you’re also doing a form of requirements validation, as you’ll spot gaps & potential issues. No doubt you’ll have previously reviewed & fed back on the requirements anyway, but you’ll be surprised how many extra things you notice once your brain is in full-blown test mode.
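To make the idea concrete, here is a minimal sketch of what a set of up-front test conditions might look like kept as plain data. The login-form conditions are entirely hypothetical examples of my own; the article only says the conditions are a list developers run before calling a story done.

```python
# Hypothetical up-front test conditions for a login form, written
# before any code exists. Keeping them as plain data means they can
# be printed as a checklist for the developer, or wired into
# automation later.

conditions = [
    "Submitting a valid username and password signs the user in",
    "An unknown username shows a generic 'invalid credentials' message",
    "An empty password field is rejected before the form submits",
    "Leading/trailing whitespace in the username is trimmed",
]

def as_checklist(conditions):
    """Render the conditions as a tick-box checklist for the developer."""
    return "\n".join(f"[ ] {c}" for c in conditions)

print(as_checklist(conditions))
```

The exact format matters far less than the habit: the list exists before the code does, & the developer works through it before handing the story over.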
Proactive testing: Technique 2
My next one works really well for us & has helped reduce defect numbers considerably, not just from the exercise itself but also from giving developers awareness of the types of testing a tester will do.
First, a little background on our team’s process. We work from a story board. On this board we have a bunch of work packages which make up the key parts of a feature, & each work package has several story cards describing what’s required to complete it. As developers complete cards, the cards make their way across the board before finally ending up in a completed column, which for us means the work has been verified & validated by both the test team & the stakeholder. You don’t have to work in a similar way; as you read on you’ll realise that with some thought this technique can apply to any process.
The idea is that when a developer finishes a story card, they do what we call a “show and tell” with a tester. The pair sit down at the developer’s PC, the developer demonstrates the work they’ve just done, & the tester then begins asking questions & exercising tests on that work. There are some key benefits to this:
- Code has not reached the product’s code base yet; we do this prior to committing, so it negates risk & reduces cost.
- Testers often get knowledge they wouldn’t have gained from verifying it themselves.
- Over time developers learn how a tester works & begin coding to prevent issues being found at these sessions.
- It’s very cheap; we don’t need to let a defect enter the defect life cycle. Developers note the issues and fix them prior to handing the story over for verification.
- It doesn’t cost either party too much time, but adds a lot of value.
Proactive testing: Dropping in a metric
When I first tried these out I had some trouble making sure developers ran the test conditions I’d written & did their show and tells for each story card. It improved over time, but as soon as I introduced a metric for measuring the value of these proactive techniques it rapidly improved to the point where every developer was doing both.
The metric is very simple: “avoidable defects”. I measured this by logging the defects I’d found in a spreadsheet. Alongside the columns of data I gathered on each defect I added another two: TC (test conditions) & ST (show and tells). In each I’d place a Y or N to record whether the defect could have been avoided by running the test conditions or by doing a show and tell. The figure started off pretty high at 48.5% but quickly came down, & any time it jumped we could identify the reason & quickly feed that back to the team.
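The calculation behind the metric is simple enough to sketch. The defect records & field names below are hypothetical stand-ins for the spreadsheet rows described above, with their Y/N values in the TC & ST columns.

```python
# A minimal sketch of the "avoidable defects" metric: a defect counts
# as avoidable if either running the up-front test conditions (tc) or
# doing a show and tell (st) could have caught it. Records are
# hypothetical stand-ins for the spreadsheet rows.

defects = [
    {"id": "D-101", "tc": "Y", "st": "N"},
    {"id": "D-102", "tc": "N", "st": "N"},
    {"id": "D-103", "tc": "N", "st": "Y"},
    {"id": "D-104", "tc": "Y", "st": "Y"},
]

def avoidable_percentage(defects):
    """Percentage of logged defects marked avoidable in either column."""
    if not defects:
        return 0.0
    avoidable = sum(1 for d in defects if d["tc"] == "Y" or d["st"] == "Y")
    return 100.0 * avoidable / len(defects)

print(f"Avoidable defects: {avoidable_percentage(defects):.1f}%")  # 75.0%
```

The value of the number is less in its absolute level than in its trend: a sudden jump is the trigger to go & find out which step the team skipped.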
This release we have used both up-front test cases & show and tells, & we have also tracked the metric for awareness. The results have been fantastic. In previous releases we’d usually have a high defect count which would be a bottleneck for the team; they’d struggle to keep it at a respectable level, even after we’d factored out what wasn’t essential to fix. This release our open defect count has never gone over twenty, & very often we struggle to hit double figures. Twenty was our threshold for low-severity defects alone last release, something we’d struggled to keep within limits, so you can see how things have improved. The awareness has also meant that defects you’d previously have expected to find are uncommon now, & over time I’m sure this will improve even more.
So far this has only been trialled in one team; our plan now is to roll it out to all our feature teams. Hopefully it’ll be as much of a success for them as it’s been for my team.
One other trick I used to cut down repetition when creating test conditions was to write checklists for common checks. These were pinned to our team’s board & could be used in conjunction with the up-front test conditions/cases. This, along with keeping your test cases lean, cuts a lot of time out of their generation. I’ll talk more about creating lean test cases, & the tools I used to make them more efficient, in the near future.
Proactive testing: Learning from others
So those are the techniques I’ve tried out, & they appear to be working well. I’m sure others out there do other forms of proactive testing. Stephen Hill, for example, previously discussed with me how he tests new APIs. His approach is very much proactive: he generates a test harness to exercise the API’s methods before developers or customers attempt to use them in similar ways. Although at that point the API is already in the code base, he’s still taking a proactive approach by providing early feedback, which reduces cost in the development process. We’d discussed in our team how to push this back further, prior to the APIs being written, but I’ll leave that chat for another time as it’s just at the idea stage currently & hasn’t been trialled.
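As a rough illustration of that harness idea, here is a sketch in the same spirit: exercise each API method with representative inputs & expected results before anyone builds on top of it. The `parse_quantity` function is a made-up stand-in for a real API method; the article doesn’t describe Stephen’s actual harness.

```python
# A hedged sketch of a test harness for a new API method. The method
# under test here, parse_quantity, is a hypothetical stand-in that
# turns "3 apples" into (3, "apples").

def parse_quantity(text):
    # Stand-in API method under test.
    count, _, unit = text.partition(" ")
    return int(count), unit

def run_harness(cases):
    """Run each (input, expected) pair against the API method and
    collect any mismatches, including raised exceptions."""
    failures = []
    for given, expected in cases:
        try:
            actual = parse_quantity(given)
        except Exception as exc:
            actual = exc
        if actual != expected:
            failures.append((given, expected, actual))
    return failures

cases = [
    ("3 apples", (3, "apples")),
    ("0 items", (0, "items")),
]
print("failures:", run_harness(cases))  # failures: []
```

The point of running this before anyone consumes the API is the same as the other techniques: the feedback arrives while the author can still act on it cheaply.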
So that’s it: two cheap ways to reduce the defects going into your product. Hopefully you’ve already tried something similar or have other techniques of your own; if you do, please share away, I’d love to hear them.