Superpowers & obsessions
I recently saw a tweet from Pradeep Soundararajan claiming to have a superpower, and an impressive one at that: “I have a superpower to find bugs even before the humans involved in the project put it in but you wouldn’t believe me so I don’t demonstrate it”. Very impressive! Sadly I can’t claim to be as powerful as Pradeep, but I do have a couple of tricks up my sleeve which I’ve used to prevent defects leaving developers’ PCs, something I refer to as proactive testing, which I’d like to share with you.
I used to be obsessed with hunting down bugs. I still enjoy it just as much now as I did back then, but now I see there are cheaper ways to find them. If you think of a typical defect life cycle, it could be some time before a defect is picked up to be fixed. Sometimes it’s never fixed & just joins that pile we don’t like to talk about. However, what if you jump into the process a little earlier, while the developer is still adding the final touches to their work? You could test then, surely. It’ll be fresh in their mind, & I’m sure they’d be much better placed to fix any issues at that point than a day, week or month later when they’ve moved on to other work. So that’s what I looked at: ways to jump in a little earlier.
Proactive testing: Technique 1
I’ll start off with a very simple one which will be common sense to most of you: test conditions created up front, prior to any code being written. Internally I call these “up-front test cases”; some of you may refer to them as acceptance tests, and whatever you want to call them is fine. The main point is that you create a list of test conditions for developers to check prior to completing a piece of work. This has worked very well for us. You do have to spend some time at the start of a project chasing developers to make sure they’re running them, but the results can be fantastic.
What I love about writing these up front is that it’s much like testing your application. Your mind goes crazy thinking of ideas for new test conditions, & while you’re writing them you’re also doing a form of requirements validation, as you’ll spot gaps & potential issues. No doubt you’ll have previously reviewed & fed back on these requirements anyway, but you’ll be surprised how many extra things you notice once your brain is in full-blown test mode.
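Up-front test conditions don’t have to stay in a document; they can even be expressed as lightweight executable checks that developers run before handing a story over. Here’s a minimal sketch of the idea; the feature, function names, and conditions are all hypothetical, not from Darren’s actual project:

```python
# Hypothetical up-front test conditions for a "username validation" story,
# written before any feature code exists. The validate_username function
# below stands in for the feature the developer will build.

def validate_username(name):
    """Stand-in for the feature under development: a username must be
    3-20 characters and contain no spaces."""
    return 3 <= len(name) <= 20 and " " not in name

# The test conditions, created up front from the requirements:
UP_FRONT_CONDITIONS = [
    ("accepts a typical username", lambda: validate_username("darren") is True),
    ("rejects an empty username", lambda: validate_username("") is False),
    ("rejects names with spaces", lambda: validate_username("a b") is False),
    ("rejects overly long names", lambda: validate_username("x" * 21) is False),
]

def run_conditions(conditions):
    """Run each condition and report pass/fail, like ticking a checklist."""
    return {desc: check() for desc, check in conditions}

if __name__ == "__main__":
    for desc, passed in run_conditions(UP_FRONT_CONDITIONS).items():
        print(f"{'PASS' if passed else 'FAIL'}: {desc}")
```

The conditions themselves are written first, against the requirements; the developer then codes until every one passes before handing the story over.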
Proactive testing: Technique 2
My next one works really well for us & has helped reduce defect numbers considerably, not just from the exercise itself but also from the awareness it gives developers of the types of testing a tester will do.
First, a little background on our team’s process. We work from a story board. On this board we have a bunch of work packages which make up key parts of a feature, with each work package having several story cards describing what’s required to complete it. As developers complete cards, the cards make their way across the board before finally ending up in a completed column, which for us means the work has been verified & validated by both the test team & the stakeholder. You don’t have to work in a similar way; as you read on you’ll realise that with some thought this technique can apply to any process.
The idea is that when a developer finishes a story card they do what we call a “show and tell” with a tester. They’ll sit down together at the developer’s PC, the developer will demonstrate the work they’ve just done, & the tester will then begin asking questions & exercising tests on this work. There are some key benefits to this:
- The code has not reached the product’s code base yet; we do this prior to committing, which reduces both risk and cost.
- Testers often get knowledge they wouldn’t have gained from verifying it themselves.
- Over time developers learn how a tester works and begin coding to prevent the issues found at these sessions.
- It’s very cheap; we don’t need to let a defect enter the defect life cycle. Developers note the issues and fix them prior to handing the story over for verification.
- It doesn’t cost either party too much time, but adds a lot of value.
Proactive testing: Dropping in a metric
So when I first tried these out I had some trouble making sure the developers ran the test conditions I’d written & did their show and tells for each story card. It improved over time, but as soon as I introduced a metric for measuring the value of these proactive techniques it rapidly improved to the point where every developer was doing both.
The metric is very simple: “avoidable defects”. I measured this by taking note of defects I’d found in a spreadsheet. Along with columns for data I gathered on each defect, I added another two: TC (test conditions) & ST (show and tells). For each of these I’d place a Y or N to indicate whether the defect could have been avoided by running the test conditions or by doing a show and tell. The figures started off pretty high at 48.5% but quickly levelled off, & any time they jumped we could identify the reason & quickly feed that back to the team.
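The calculation behind the metric is simple enough to sketch. The column names follow the TC/ST scheme described above, but the defect records here are invented for illustration:

```python
# Sketch of the "avoidable defects" calculation. Each defect record notes
# whether the up-front test conditions (TC) or a show and tell (ST) would
# have caught it. The data below is made up for the example.

defects = [
    {"id": 1, "TC": "Y", "ST": "N"},
    {"id": 2, "TC": "N", "ST": "N"},
    {"id": 3, "TC": "N", "ST": "Y"},
    {"id": 4, "TC": "Y", "ST": "Y"},
]

def avoidable_percentage(records):
    """A defect counts as avoidable if either technique would have caught it."""
    avoidable = sum(1 for d in records if d["TC"] == "Y" or d["ST"] == "Y")
    return 100.0 * avoidable / len(records)

print(f"Avoidable defects: {avoidable_percentage(defects):.1f}%")  # 75.0% here
```

Tracking this weekly gives the trend line the team can react to: a spike prompts a conversation about what slipped through and why.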
This release we have used both up-front test cases and show & tells. We have also included the metric for awareness. The results have been fantastic! In previous releases we’d usually have a high defect count which would be a bottleneck to the team; they’d struggle to keep it at a respectable level, even when we’d factored out what wasn’t essential to fix. This release our open defect numbers have never gone over twenty; very often we struggle to hit double figures. Twenty was our threshold for just low severity defects last release, something we’d struggled to keep within limits, so you can see how things have improved. The awareness has also meant defects you’d have expected to find previously are uncommon now, & over time I’m sure this will improve even more.
Currently this has only been trialled in one team; our plan now is to roll it out to all our feature teams. Hopefully it’ll be as much of a success for them as it’s been for my team.
One other trick I used to cut down repetition when creating test conditions was to write checklists for common checks. These were pinned to our team’s board & could be used in conjunction with the up-front test conditions/cases. This, along with making your test cases lean, cuts down a lot of time in their generation. I’ll talk more about creating lean test cases in the near future & the tools I used to make them more efficient.
Proactive testing: Learning from others
So these are the techniques I’ve tried out, & they appear to be working well. I’m sure others out there do other forms of proactive testing. Stephen Hill, for example, previously discussed with me how he tested new APIs. His approach is very much proactive, in that he generates a test harness to test methods on the APIs prior to developers or customers attempting to use them in similar ways. Although at this point the API is in the code base, he’s still taking a proactive approach by providing early feedback which reduces cost on the development process. We’d discussed in our team how to push this back further, prior to the APIs being written, but I’ll leave that chat for another time as it’s just at the idea stage currently and hasn’t been trialled.
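The harness idea can be sketched very simply: call each public method with representative inputs before any consumer depends on it, and collect anything that disagrees with expectations. The API class and test cases below are entirely hypothetical, not Stephen’s actual harness:

```python
# A minimal sketch of an API test harness: exercise each method with
# representative inputs before developers or customers rely on it.
# TemperatureApi is an invented stand-in for the API under test.

class TemperatureApi:
    """Stand-in for the newly written API."""
    def to_fahrenheit(self, celsius):
        return celsius * 9 / 5 + 32

def run_harness(api, cases):
    """cases: list of (method_name, args, expected). Returns the failures."""
    failures = []
    for method_name, args, expected in cases:
        actual = getattr(api, method_name)(*args)
        if actual != expected:
            failures.append((method_name, args, expected, actual))
    return failures

cases = [
    ("to_fahrenheit", (0,), 32),
    ("to_fahrenheit", (100,), 212),
    ("to_fahrenheit", (-40,), -40),
]

failures = run_harness(TemperatureApi(), cases)
print("All passed" if not failures else f"Failures: {failures}")
```

Because the harness runs as soon as the API exists, the feedback arrives before any consumer has written code against a broken method, which is where the cost saving comes from.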
So that’s it, two cheap ways to reduce defects going into your product. Hopefully you’ve already tried something similar or have other techniques to share, if you do please share away I’d love to hear them.
Darren,
Nice article. I’d like to discuss your defects spreadsheet that you use for your avoidable defects metric with you sometime if you don’t mind.
I like the idea of the show and tells with the testers. It’s very powerful as a technique even if the tester does not have much (or any) programming experience, because the programmer is forced to think through his/her logic on the fly.
Thanks for the mention too
Stephen
Hi Stephen,
I’d be very happy to share it with you. It’s excellent not just for finding defects but also for the extra knowledge the tester gains via communication with the devs.
Drop me an email and I’ll post it when I get back into the office. Thanks for the comment.
Cheers,
Darren.
Hi Darren,
Awesome idea. I like this a lot. It’s great that you’re reducing the cost of bugs by getting your tests, these test ideas, to the devs earlier in the cycle. Brilliant stuff, cos you’ve reduced ambiguity about the feature and the testing to be done.
I feel I might copy some of these ideas.
Rob..
Hi Rob,
Please let me know how you get on & if you come up with any other ways to be proactive.
Cheers,
Darren.
Excellent stuff. I would love to work with such a team in future. Many of the contexts I am brought into are not to do with defect prevention but defect detection. Being in too many such contexts could be a trap – my clients’ happiness and my own could only be based on finding a lot of defects. Moving a person like me into a context like the one you mention above, I would have to re-think my idea of success.
Being a (good, ahem) consultant, I would probe what my clients want to achieve and why they want to achieve it, propose alternatives, and help them with what would suit them. If you were my client, I wouldn’t say, “Hey, I am good at finding tonsa bugs, so let’s allow the developers to put them in because we are going to catch them anyhow”.
Such an attitude has led developers to actually write more bugs than code. In one of my projects I said to my client, “Every line of code appears to have at least one semicolon and two bugs”. The culture amongst developers was, “Hey, so what, testers are going to find that. If they don’t find it, it’s not my fault”.
No matter how much great testing (er, I mean great bug finding) was done, it didn’t really help until we testers sat together with developers and buddy-tested the code before it was checked in.
Hey, BTW, even if you do all these things, you still don’t have the superpower I have
;-P
One of our testers had a similar problem in the past; he’d attempted to verify a fix that had been made. It didn’t work, so he spoke with the developer who’d fixed it, who said “Oh really? Try it again now, it should be working”. This chat went on for a bit, before the tester, on his fourth attempt at verifying this fix, asked the developer “Did you actually test this before you committed it?” “No, that’s what you’re for, isn’t it?” replied the dev. Thankfully he’s now testing his fixes.
Hi Darren,
Good post, some good ideas, and I’m glad it’s working. I’m with you big time on the proactivity. I’ve also found that programmers can help me lots without knowing they’re doing it. Could that be testing by another name?
The ‘show and tell’ is brilliant!
I love walking through stuff. I have also found in my experience that if a programmer thinks they found an issue themselves, they are more likely to fix it... even if you are the one that subtly prodded them in the right direction through some proactivity.
Also, do you get to peek over their shoulder to look at the code?
Peter
If it’s a technical task, the show and tell could be a review of what automated testing they’ve written around their code. It doesn’t have to be a technical task for you to look at the code, though; you can just ask. Even if you don’t know code you can still find things: just ask the programmer to describe what it’s doing. They might mention some code they’ve written to do something that you think may be re-usable elsewhere in the future. Cool, let’s make that code re-usable.
Thanks a lot for the nice comment, I’m glad you like it.
Interesting post!
I’ve seen a type of combination of techniques 1 & 2 – we call it “system play”. Before, during or after coding we’ll discuss typical call or use case flows between subsystems or systems. There are reps from the different systems:
Person A: Ok, we’ll send you this piece of information X
Person B: Yes, we’ll pass this on and reply with Y
Person A: Wait, we need Y + Y’ – ah, there’s a potential issue…
Before coding this might be to flush out inconsistencies, and after coding we might use it to find target areas for some of our testing – exploratory targeting.
On the metrics side, the figures will always be a little subjective – did my test condition really prevent that problem, or would it have been caught in a different way during coding, or would the developer have come up with some variant of it anyway, just a little later?
Being proactive is always good in my book – even if it’s difficult to measure. You can still communicate this to your boss/stakeholders – so that they know the impact/input you/the tester is having, even if it isn’t always tangible.
Simon, thanks for the insight into your methods, very interesting. I have a few questions for you:
- Who is involved in this process?
- Do you come prepared with use case flows or make them up on the spot?
- How often do these happen?
Agreed on the metric. I’m planning on doing a follow-up post after discussions with a couple of people, to add clarity to the intentions of these techniques & how they should be handled, along with the intent of the metric.
Good questions Darren.
Who’s involved? It depends…
Pre-development it’s usually just system designers/architects (although I want to get the testing perspective in there) – this is really to get the major inconsistencies out of the way, making sure that no big thing is overlooked. This is the one we’d call “system play”.
During development – testers, designers, system architects – the purpose here is to help the testers think through (or come up with) some test ideas. It might start out as a presentation – this is how it works – then move into a “what if” discussion or brainstorm. These have been useful for spreading knowledge and identifying some risk areas to test (especially where you get differing opinions about how something works, should work, or nobody knows). Very useful for discussing feature interactions.
The areas identified might equate to ET test charters.
These happen once or twice per certain (complex) features – but it’s something under review.
Use cases might come from a customer, a standard, a feature spec, an idea or a previous fault.
I like it, Simon, & we’ve already begun (thanks to you) looking at ways to do “system plays” with our requirements. Thanks for sharing.
Nice post
I had implemented something similar to Technique 1 in my previous organisation. It was basically some conditions for the common tests we perform on a web page. The developers used it for their initial verification, and we testers could really concentrate on breaking the stuff. It was good.
We didn’t quite follow the “show and tell” stuff due to the effectiveness of the above approach.
Keep writing!
Hi Nandagopal,
Excellent, I’m glad it worked out so well for you. Often the struggle is just getting buy-in from the developers in the first place, which is why good communication skills are always key in successful testers.
If you try anything else out proactively, please do let me know. I’m always keen on hearing about new ideas & people’s experiences using them.
Cheers,
Darren.
Hi Darren,
Do you have any more information on how you measure for awareness of testing types?
My initial impression was that awareness of testing types appears to be part of the show and tell, which would then mean any issues raised at this point wouldn’t get as far as the spreadsheet?
Thanks in advance,
Colin
Hi Colin,
Good question!
Everything makes it onto the spreadsheet for me. Even if issues are found during a show and tell, I still take note of them so that I can class them as avoided defects & then produce a more accurate measurement for that week’s avoidable defects.
The awareness of testing types isn’t directly related to the show and tell as such; it’s really down to that tester’s mental testing model. I talk about models briefly here. What it does do is raise awareness of the types of testing that tester does, so hopefully over time developers will be aware of these and code to prevent those issues from happening.
Likewise it also raises the whole awareness aspect of quality. For our team it made us all strive for the same goal, the guys seemed to enjoy trying to get the avoidable defects number down. They appeared genuinely upset if it spiked one week & would actively seek out why this had happened.
Another good thing around awareness of testing types is that by noting down those issues and classing them into different testing types, you can produce weekly stats to feed back to your team. Providing awareness of trends often helps uncover much larger problems sooner & is quick, low-cost feedback. Again, it’s awareness and not control; the minute you see management take hold of stats & use them as control metrics, make them aware of the potential damage this will do.
Hope that helps; if not, let me know & I’ll see if I can do better.
Cheers,
Darren.
Hi,
We use a very similar mode of testing.
We are working on a model/process which prevents defects even before they get a chance to get built in.
What kind of challenges have you faced in ‘Show & Tell’? What is your technique? Do you ask a lot of questions and do mini explorations with the developer?
Hi Panna,
Apologies for the late reply.
Firstly, aim to learn about what the developer has worked on; you’ll often find out things that weren’t written down in documents or communicated. This will give you more scenarios to test when you come to formally test the feature.
As the developer begins demonstrating their work, you’ll be able to start asking questions about the feature, perhaps asking them to demonstrate how they’ve coded certain aspects of it, what testing they’ve done themselves, where they think it is most prone to failure, and so on.
When running quick tests, don’t take over the developer’s PC. Instead, if you want them to test something, explain it to them and get them to run that test themselves. This way they’re actively learning, and will be more likely to remember to check it themselves next time if it fails.
I find 5-10 minute show and tells work best. You’re not there to formally test the feature; you just want to run quick tests and gain a complete understanding of what you’re about to test.
Ping me back if you’ve any more questions.
Thanks,
Darren.
Hi Darren,
No doubt the ideas (up-front test cases and show & tells) that you write about in the blog are fantastic, but I have a small doubt here.
Let’s say the dev team is onsite and the testing team is offshore, a geographical difference. In this case, what alternative would you suggest?
Thanks
Anurag Raghuvanshi
Hi Anurag,
Hopefully there will be a time window in which both the onsite and offshore teams are in the office. You could use screen sharing tools; they work very effectively for remote show and tells. TeamViewer works well for this.
Up-front test cases will certainly help. More generically, perhaps you could create a checklist for developers to run their work against.
In my current company we have three main implementation stages which we use generic test checklists for: design, front end development and back end development.
These work well, as they provide a generic higher level of tests to check your work against.
Our design team really likes them and uses them heavily. I’ll need to blog some examples soon.
Thanks,
Darren.