Some background
Two weeks back Zeger Van Hese refreshed my memory of something I'd almost forgotten about: a simple little testing technique which, in my honest opinion, can be very powerful & often essential.
So what is it? Well, it's very simple really: people make claims about software all the time, and our job as testers is to confirm, dispute or reject those claims. You've probably done this many times without even realising it; I know I have. Up until now I'd never noted it down as a technique to add to my testing model.
I've now heard about claims testing three times in the past three months. Once in James Whittaker's book on Exploratory Testing, again during the Rapid Software Testing course I attended, when Michael Bolton described the FDSFSCURA heuristic mnemonic, and finally when I participated in the first ever European Midweek Testing session & Zeger Van Hese showed how he'd used it in his testing approach. Although I'd valued it as a technique, it wasn't until I saw Zeger demonstrate how powerful it could be in his approach that I thought, yeah, I really should start using this all the time.
Identifying claims
So how do you find these claims?
Documentation
Well, an excellent place to start is your product's documentation. Just pick it up and start identifying the claims it makes. These can be as simple as a compatibility list. Does the product really work with that claimed database which no one has ever tested? Oh, and that browser we claim to support hasn't been tested either; we still use the previous version. If you're clever you'll skip the reading part altogether and sit down with someone on your documentation team, who'll probably be able to point you in the right direction reading-wise.
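As a small sketch of the idea, you can cross-check the documented compatibility claims against a record of what testing has actually covered; anything claimed but never exercised is a claim worth chasing. All the product names here are hypothetical, purely for illustration:

```python
# Sketch: compare documented compatibility claims against what testing
# has actually covered. All names below are made-up examples.

DOCUMENTED_CLAIMS = {
    "browsers": {"Firefox 3.6", "IE 8", "Chrome 8"},
    "databases": {"Oracle 11g", "SQL Server 2008", "MySQL 5.1"},
}

TESTED = {
    "browsers": {"Firefox 3.6", "IE 8"},            # Chrome 8 never tested
    "databases": {"Oracle 11g", "SQL Server 2008"}, # MySQL 5.1 never tested
}

def untested_claims(claims, tested):
    """Return each documented claim that no test run has ever covered."""
    return {
        category: sorted(claimed - tested.get(category, set()))
        for category, claimed in claims.items()
        if claimed - tested.get(category, set())
    }

print(untested_claims(DOCUMENTED_CLAIMS, TESTED))
# → {'browsers': ['Chrome 8'], 'databases': ['MySQL 5.1']}
```

Nothing sophisticated, but even a crude gap list like this turns "read the docs for claims" into a concrete session charter.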
Demo
Most companies have a demo team; go talk to them & see what claims they have made, or will be making, about your new features. The team I'm currently working in is developing a key new product feature which will hopefully make our company lots of money. Our demo team have been waiting anxiously to get their hands on it so they can start showcasing it in their demos. They'll have some claims we'll need to be wary of, something I've scheduled in to be looked at.
Sales
Likewise your sales team, much like your demo team, will be making all sorts of claims. It's worth simply asking how they plan to sell the feature to customers; from that alone you'll gain many valuable insights. If you think about the context of a claim, it might be OK for your sales team to claim your feature can do something it currently can't. Why? Because the feature may be extended for that customer by another team. If you missed the valuable insight your sales team just gave you there, let me rephrase: a sales team's claims can point to key extensibility scenarios that your feature should be capable of handling.
Proactive claims testing
A common problem with many product features is that they're developed to fulfil a requirement. Requirements change over time, & that's when the extensibility of a feature can become a problem. Sure, anything can be extended with some work, but should we have to invest ridiculous amounts of time to do it? If a feature is developed in such a way that extensibility is fairly easy, or at least not made difficult, then we've saved ourselves a lot of time & money through some thoughtful consideration up front.
If we as testers can talk with our sales team before a feature is developed, we can find at least some of those extensibility points for free, from a quick chat. Better still, we can influence design simply by asking whether those points will be extensible. I find that even just asking the people working on the feature how they plan to make it extensible helps iron out these problems early, at a much lower cost.
You can look upon your sales team's claims as a unique opportunity to do some cheap proactive testing.1
I've talked more about this than I'd intended to, so I'll pass on one more place to look for product claims before wrapping this post up.
Development
Development claims. Those do happen, right? Sure they do! I remember my feature team lead at the time being cornered by our CEO for a statement on how much faster a new developer tooling feature would make developing web pages for our deployment teams. He eventually gave in & quoted a figure of around 40%. Yes, a very bold claim! One that we later had to prove, & luckily enough his magic number, which I'm sure he'd debate, did turn out to be around 40% faster. Obviously there was no trickery involved here & this was of course a very valid experiment.
It's easy to forget the impact a claim from the development team can have on others, be that your CEO, customers, management and so on. It can be a struggle to keep track of these claims, and really you just have to try your best: read all your emails, talk to people on your team, & communicate with your customers & stakeholders regularly.
So without going into this too much, I think you can see how powerful a technique claims testing can be. Nothing new, however hopefully I've provided you with a few insights on how to make good use of it, & I hope you'll provide me with a few as well. Thanks for reading.
- A technique to help prevent issues from entering code in the first place. Proactive meaning you provide methods to test prior to any code being committed into your product's code base, as opposed to reactive. [↩]
Darren, thanks for the honorable mention. It’s nice to see that something I did or said can inspire someone to write a much more inspiring blog post. When encountering puzzling problems while testing, I use MB & JB’s HICCUPPS/F heuristic to see whether I can filter out the real “bugs”. One of the C’s in there is “Claims”. I encountered claims violations regularly, so I decided to make ‘claims’ the first thing I check when getting my hands on a new piece of software or product. In situations like Weekend Testing, within that very limited time-box, it’s a frugal thing to do. Best bang for the buck. And the problems being revealed are important too, since not living up to created expectations can make or break a product/company.
– Z
Hi Zeger,
HICCUPPS/F is a good generic heuristic mnemonic to use on any project, hence why it's so useful for weekend testing sessions.
I think for me, seeing it used in action encouraged me to add it into my own model. I'm glad I did, & I'm glad we both attended the midweek testing session to encourage me to do that.
Cheers,
Darren.
Nice, Darren. Now, if I could figure out how to get the sales guys to make the same claim twice then the task of testing the claims they are making to customers would be easier.
Hi Pete,
Thanks for the comment; they sound like a slippery bunch, your sales team.
If they're making multiple claims, perhaps it's not a bad thing? Perhaps it just allows you to identify more extensibility points for free, allowing you to do some good design up front.
Cheers
Darren
Hi Darren,
Great post. I’ve done my fair share of claims testing in the past on a variety of products. I’ve always found hardware support and performance to be the two with the most miscommunications/misconceptions around them.
One thing I have seen in the past is results and feedback from development teams being converted into claims. So a reverse style of claims testing, where some good performance results have led to a claim that our new product is X times faster (as an example).
This is great, but only if we are careful to communicate our findings with this in mind. I’m always very careful to think about this when reporting testing and results. Just because we did 3 tests on browser X doesn’t automatically mean we now support browser X, yet sometimes management, marketing or sales can take this to mean they can sell/advertise it.
One of the best ways I've found for discovering claims is to sit in on sales meetings and training sessions, where you hear first hand the capabilities of the product and what it can/can't do; especially useful if these claims don't meet reality.
Really great post.
Thanks
Rob..
Hi Rob,
Some excellent stuff here thanks. Some things that could possibly help which I’m sure you probably already do:
Hardware & performance requirements can be chased with the customer at the start of a project, then locked down & agreed upon in your specification documents.
In regards to development claims, it can help to have a support matrix documented: examples being a minimum supported hardware matrix, a browser support matrix, or documented recommended hardware specifications.
Test claims is another good one, and I love how you go into the whole aspect of communication & better reporting here. A test claim might not be one that we have stated ourselves; as you say, it might be one that others have taken from what we've reported.
I think the whole aspect of indirect test claims & better reporting could make a very good article on its own; there is a lot you could talk about here. In fact, better reporting & communication is something I'm currently trying to push in the right direction just now.
I like your idea of sitting in on sales & training meetings/sessions; I'm going to put that on my to-do list, excellent! Thanks for your comment, some very nice thoughts.
Cheers,
Darren.
Hi Darren,
Good post! One trick of mine is to listen very carefully to statements made by developers to get buy-in for some whizzy piece of new hardware or software because the claims about what the software/hardware can do can be wildly off the mark.
I tend to listen out for a dismissive tone of voice or apparent exaggeration for effect, then ask 'why?' or 'how?' questions. I then drill down further until either I am satisfied that the claims being made are justified, or there is an admission that more work is needed or that they don't know something.
With sales and/or marketing claims I often ask ‘how do you know that?’ and drill down further from there. I use a similar technique for establishing validity of testing claims too (and yes, that often means I’m questioning myself).
In a lot of cases, of course, dynamic execution of the product is needed and I usually help with that.
Good stuff!
Thanks,
Stephen
Hi Stephen,
Excellent insights & tips. I think you're displaying first hand how to harness the power of communication: not just validating claims, but gaining more valuable information by drilling down into a claim with some follow-up questions while gauging the responses. I think that's brilliant, I really do! It's something I should certainly do more often, and not just for claims; in all aspects of communication.
Good stuff, thanks for sharing.
Cheers,
Darren.
Nice post!
One thing which stood out from the post and the comments is that no one explicitly mentioned the company website, news articles or other online sources of information.
Are we using different terminology for the same thing?
Regards,
Ajay Balamurugadas
Hi Ajay,
You're correct, no one did. My intent was to provide some insights to get people started & thinking about claims testing more. The ones you've added are excellent & in my honest opinion should be included in any project's test basis. Marketing claims are just as valuable to consider, as you've clearly shown, thanks.
Cheers,
Darren.
The term 'claims testing' is widely used in food and drug manufacturing, which is strictly regulated by trading standards; any claims made have to be verifiable, which, in my opinion, should also apply to software products. Claims and special-claims testing is therefore carried out to ensure that any claims, such as performance figures, are accurate and meet customer expectations.
Correction: it seems that you started writing the blog as early as 8 December 2010, and not two weeks ago as you claim.
The first post that I came across on Claims Testing was from Curious Tester (Parimala Shankaraiah) re: http://curioustester.blogspot.com/2009/12/claims-testing.html
Requirement documents are written by technical people, so they're bound to have some bias towards claims. Whereas current agile developments that follow a specification-by-example philosophy generate minimal, to-the-point documentation, so there is unlikely to be much over-claiming anyway. Even the features are demoed by the feature team, so they would not claim over and above what has been built. Sales people always have a tendency to over-claim, so no surprise there. There is a culture shift, so in my experience claims testing can take a back seat, but it is something that should not be overlooked.
Hi Mohinder,
Thanks for the comment.
I think how much scrutiny we give claims is dependent upon the context. For example, it's common practice for many companies to give potential customers the illusion during demos that they have more functionality than they currently do. Is it our job to highlight the risk here? Possibly. It could be, though, that the company has always been aware of this risk and simply sees the strategy as a way of getting ahead of the competition. However, if our role were to oversee the demo's development prior to it being shown to the potential customer, perhaps our mission would be to minimise the potential for embarrassment by highlighting gotchas a customer might enquire about, giving the team the chance to mock out more functionality in those areas.
So enforcing claim accuracy can be essential at times, but in other contexts less important.
I agree that claims testing (at least in my experience) tends to take a back seat; I'd assume that in tight, detailed, contractual projects it becomes more essential.
Thanks for taking the time out to share your thoughts, Mohinder.
Cheers,
Darren.