Week Night Testing: Requirements analysis & testing traps

Summary

In the following post we'll discuss testing traps, requirements analysis, project risks, mind mapping, and observational bias, among other things, in what was the fourth Week Night Testing session, which I had the opportunity to facilitate.

Background

I've always found requirements testing great fun.  There is something about it which gives me much more of a kick than testing any developed functionality would.  Perhaps it flexes my brain cells that little bit more, forcing me to quickly build up a model of the proposed requirements and provide rapid feedback on gaps, conflicts and potential issues, whilst also keeping an eye out for anything that could be done better or hadn't been considered yet.  It's fun!  It really does get you zoned in, and before you know it, hours have passed whilst you've been enjoying poking holes in the specification.

Anyway, I'd already attended a few Week Night Testing sessions, the last of which gave me the chance to pair with the great Lisa Crispin (lucky me!).  I thought that running a session dedicated to testing requirements could be fun and beneficial for everyone.  It's a key skill to have as a tester, and something not everyone gets the opportunity to do.  It's also proactive, and as you all know, being proactive is something I'm very passionate about.

So I put myself forward to facilitate session number four, which was held on the 26th of January.  I decided I would come up with an idea for a new feature for a CRM product and write up a high-level, business-requirements-style document for it.

My fictional company CRM-R-US was born!  I spent a couple of hours drafting up requirements for the social media integration of the CRM-R-US product with Twitter.  The requirements were littered with traps, which I hoped most participants would pick up on.

Actually writing them was a lot of fun.  It really got my imagination going, and a few people thought the proposed feature was a really good idea.  I’d agree, although I think it needs a lot of work!  We’ll get into that later though.

The session begins

The turnout for Week Night Testing session number four was excellent!  I was really pleased that so many had turned up for the session.  However, it did give me the difficult task of keeping up with the barrage of chat.

Adam Brown, Adam Yuret, Anand Ramdeo, Aruna Venkataraman, Ben Klaasen, Brian Osman, Del Dewar, Lisa Crispin, Mike Scott, Mohinder Khosla, Phil Kirkham, Prem Ranganath, Rakesh Reddy, Richard Hill, Sharath Byregowda, Tony Bruce and Vamshi all turned up for what was going to be the daunting and difficult task of providing feedback on my trap-littered requirements.

My role in all of this was to play a stressed-out Business Analyst who'd been tasked with writing up these requirements and getting key members of CRM-R-US to agree upon them.  These were the first-draft proposed requirements, so testers had the fun job of dealing with vague descriptions and no visual indication of how the end product would look.

If I'm honest, this is my idea of fun!  I love these kinds of requirements because I can provide lots of feedback quickly and rapidly work towards progressing them into a more stable state.

The mission

The mission was simple: participants had to review and provide feedback on the initial requirements to help identify any gaps, risks or potential issues with them.  If they wanted to help a little more, they could provide suggestions for improvement as well.

Requirements and chat transcript

If you're keen on seeing the requirements you can view them here.  You can also read the session's Skype chat transcript here.

Introducing meetings

To help deal with a potential barrage of questions, people were required to book a meeting (a separate Skype chat room) with me, in which they could ask me further questions that might help them complete their mission.  There was a trap in here, but we'll get to that later on.  Meetings were limited to five-minute slots.  Only seven of the seventeen participants booked meetings, and of those, only a few asked questions which could have provided insights to help speed up their mission.

Everyone went solo on this mission except for Lisa Crispin and Mohinder Khosla who had previously arranged to pair during the session.

Wireframes and bias

One of the first problems people had to overcome was the fact that the document had no wireframes, which is typical of initial business requirement documents.  Some questioned the need to have them if we were only providing feedback on initial business requirements; others stated how they’d help create a quicker understanding of the feature.

Phil Kirkham raised a good point when he asked “Since the application is in the specification stage; are you testing flow and usability?  Or are we finding if there are any gaps and risks before we even start building?”  The mission was to provide feedback on these initial requirements: identify gaps, issues and risks, and, if you wanted to help some more, provide suggestions for improvement.

At this stage we have an idea of the business needs; needs that will quickly change before leading to more grounded requirements.  So wireframes would be an expensive task at such an early stage.  They can be cheap if quickly drafted, but would still be expensive in that they'd require constant adjustment.  Sure, they'd be helpful, but they aren't entirely necessary while we are still unclear on the end feature we will actually build.

Providing wireframes at such an early stage of the requirements gathering process runs a risk of observational bias.  The streetlight effect sums it up perfectly: by paying so much attention to something which helps us understand the requirement more clearly, we are blinded to the places we could otherwise have been looking.  This essentially allows risks to be missed and key gaps to go unnoticed.  That's not to say we shouldn't provide them, but we need to keep ourselves aware of the potential bias, so that we can negate the risks wireframes bring at this early stage.

Ben Klaasen decided to draw up a wireframe for the presence dashboard which was cool.  I’d be interested in hearing his thoughts on how this helped him in terms of meeting the mission.

Wireframe mockup by Ben.

Before we jump off the topic of wireframes, I'd like to include a link to a product called Balsamiq Mockups, which helps you design wireframes quickly.  Mike Scott provided this, and I think it looks very promising.

Summary of the first hour’s discussions

Before I knew it, questions were flying in thick and fast!  They were mostly around the vagueness of the requirements.  I'll provide a summary of some of the best feedback given during the testing chat session, before moving on to people's session reports.

Non-functional requirements

  • Richard Hill was one of the first to ask if there were any non-functional requirements.
  • A few people asked questions probing the limits of the feature and of specific areas within it.
  • Richard Hill was also one of the first to ask about the languages and character sets supported.

Lack of information

  • Sharath Byregowda was one of the first to ask for more detailed information on specific requirements.
  • Lisa Crispin was one of the first to ask for examples.
    • A perfect question here would have been “Is this all the information available?”
  • Tony Bruce was one of the first to ask for a solid use case for a section of the feature.

Terminology

  • Prem Ranganath stated “While this doc is called a BRD, the tone seems more like an SRS focused on system realization”.  Myself, I'd ask how many companies actually understand requirements gathering; much like testing, most don't have a common understanding of terminology or styles.  I agree this is some way from a standard BRD; it's even further from an SRS, though.
  • Lisa Crispin brought up the fact that the document states it'll have use cases to explain the requirements, yet none exist.  Well spotted Lisa.

Scope

  • Adam Brown questioned what the out-of-scope requirements meant to the project.
  • Tony Bruce asked what happened to the MoSCoW prioritisation of the requirements.  Well spotted Tony.

Environment

  • Adam Brown was the first to uncover that the application was web based and immediately began looking into non-functional requirements that might be needed here.

Risks

  • Ben Klaasen was quick to notice that campaigns played a major role in the feature.
  • Brian Osman was quick to point out a dependency between the campaign engine and another, not yet developed, feature.
  • Brian Osman again on campaigns: “there seems to be a huge risk as the section calls the campaign engine a ‘vision’” – hmmm – business speak for “I have an idea, but…”
  • A key idea behind the feature is to quickly build up a mass following for your online profile via a leads engine.  Richard Hill was first to spot a major flaw in this when he asked “Why would leads follow us back?”
  • Brian Osman spotted something peculiar in a couple of requirements and asked “What’s an intelligent mechanism?”  Well spotted Brian; that’s business talk for “it should just work”!
  • Tony Bruce brought legality into the equation when he asked “Is this legal?”  Something we should always check when storing information on a customer and using 3rd party software.
  • Phil Kirkham used external sources to aid his approach to the mission, asking which of the top ten common Twitter campaign mistakes could potentially occur with this feature.
  • Phil Kirkham also highlighted that the application was dependent on a 3rd party API (Twitter) that might change; see the sketch after this list.
  • Richard Hill uncovered another 3rd party dependency when he asked what format the reports functionality would be delivered in.
  • Tony Bruce decided the project would be a risk to his career and thought it would be best to hand in an early resignation!
  • Tony Bruce also picked up on the misc requirements, which would just “fit in somewhere”.
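
On Phil's point about depending on a 3rd party API that might change: one cheap way to keep watch on that risk once anything gets built is a small contract check that fails loudly the moment the API's responses stop carrying the fields your feature relies on.  Below is a minimal sketch in Python; the field names and the sample payload are illustrative assumptions of mine, not anything taken from the CRM-R-US requirements.

    # Hypothetical contract check for a 3rd party (Twitter-style) API.
    # Fails fast if a response no longer carries the fields our feature needs.

    # Assumed needs of the leads engine - purely illustrative.
    REQUIRED_USER_FIELDS = {"id", "screen_name", "followers_count"}

    def missing_user_fields(payload):
        """Return the required fields absent from an API user payload."""
        return sorted(REQUIRED_USER_FIELDS - set(payload))

    if __name__ == "__main__":
        # In practice this would be a live response or a recorded fixture;
        # here it's a hand-written stand-in with one field renamed.
        sample = {"id": 12345, "screen_name": "crm_r_us", "friends_count": 42}
        missing = missing_user_fields(sample)
        if missing:
            print("Contract broken - missing fields: %s" % missing)
        else:
            print("Contract holds.")

Run regularly against the real API, a check like this flags a breaking change before your customers do; against the stand-in payload above it reports "followers_count" as missing.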

Improvements

  • Adam Brown suggested a scheduled tweets feature so that you can still target overseas customers whilst your agents are in bed.
  • Brian Osman suggested automating some points of manual interaction within the feature.

Competition

  • Sharath Byregowda was the first to investigate whether other products already did what was planned, or similar/better things.
    • Brian Osman followed up on Sharath’s investigation by asking a key question “What makes this different?”

Meetings

  • Ben Klaasen, Lisa Crispin, Mike Scott and Mohinder Khosla all managed to get some additional information from me during a meeting to help them understand the proposed feature a little more.
  • Adam Yuret and Anand Ramdeo both made the smart move of asking for feedback on their feedback.  This is good because it lets you know if you’re providing the correct type of information.
  • Mike Scott managed to get an example of how the campaign engine would work in practice, having homed in on two risks:
    • The requirements were too large to provide feedback within the one hour session.
    • Campaigns were the biggest risk to the project.

Phew!  Quite a lot, don’t you think?  And that’s only the highlights of the first hour.

Session reports

So the time came to produce our session reports.  I'd already noticed a trend.  There were the testers who were in the zone and had ignored, or only glanced at, the chat over the past hour, focusing solely on their feedback report.  Then there were those who were keen to interact with the stakeholder (me) and the group, to collaboratively gather and expose additional information.

I don't think either group was right or wrong, though I do think a blend of zoning in and collaborating with others and your stakeholder is needed.  Had more time been available, I'm sure most would have met somewhere in the middle in that respect.

Anand Ramdeo’s mind map

Mind maps were popular once again, as with most of these testing sessions.  An excellent one came from Anand Ramdeo, who'd spent the past hour in the zone.

Anand's mind map.


Anand highlighted some key points, including:

  • Initial risks to the project
    • Skilled resource being available to work with the Twitter APIs.
    • Non-functional requirements such as accessibility and volume.
  • Risks with the proposed features
    • Agent abuse, with a suggestion for supervisor monitoring of accounts.
    • Could auto-following people cause image concerns for your profile, e.g. following spam accounts or the wrong type of user?
  • Gaps
    • How would the system handle users who unfollowed you?

I also liked how he considered feature claims.  Claims testing is essential, and he raised some good points by identifying and questioning the claims the document makes.

Lisa Crispin and Mohinder Khosla’s mind map

Lisa Crispin and Mohinder Khosla paired and produced another great mind map.

Lisa and Mohinder's mind map.

They picked up on some key risks as well.  A quick highlight of these being:

Initial risks to the project

  • What problem is this actually solving?
    • This is the first question I’d always ask.  Nicely done :-)
  • Is this achievable?
    • Highlighting the need for collaboration with developers.
  • Non-functional requirements such as performance and security.

Risk with proposed features

  • Questioning the value certain features actually bring to the product.

Suggestions for improvement

  • Personas
  • Solid usage examples

Rakesh Reddy’s mind map

Rakesh Reddy also highlighted some key concerns in his mind map.

Rakesh's mind map.

Initial risks to the project:

  • Inconsistent domain language, possibly leading to an invalid design
  • The risk that the Twitter API might auto-block our account as a spam bot
  • 3rd party limitations from Twitter conflicting with the proposed feature

The second one really stood out for me!  The fact that the limitations of our 3rd party dependency (Twitter) might ground this project from the start is probably one of the biggest risks possible.  Nice one, Rakesh!
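
It's also a risk you can put rough numbers against before a single line of product code is written.  Here's a hypothetical back-of-the-envelope check in Python comparing a planned campaign's daily activity with the 3rd party's caps; the cap values and the plan are assumptions for illustration, not Twitter's actual published limits or anything from the requirements.

    # Hypothetical feasibility check for the 3rd party dependency risk:
    # would a planned campaign blow through the platform's daily caps?
    # Cap values are illustrative assumptions, not real Twitter limits.

    ASSUMED_DAILY_CAPS = {"follows": 400, "tweets": 1000}

    def campaign_headroom(planned):
        """Map each planned action to its remaining daily headroom
        (negative means the plan exceeds the assumed cap)."""
        return dict((action, ASSUMED_DAILY_CAPS[action] - count)
                    for action, count in planned.items()
                    if action in ASSUMED_DAILY_CAPS)

    if __name__ == "__main__":
        plan = {"follows": 1500, "tweets": 300}  # a hypothetical leads-engine day
        for action, headroom in sorted(campaign_headroom(plan).items()):
            status = "OK" if headroom >= 0 else "OVER CAP"
            print("%s: %s (headroom %d)" % (action, status, headroom))

With these assumed numbers, the plan's 1,500 follows a day is nearly four times over the cap: exactly the kind of constraint that could ground the feature at the requirements stage.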

Highlights from the debrief and others' reports

Mind maps are easy for me to highlight here, but there was also some other excellent feedback presented in the session reports and the discussions that followed.  Here are some quick highlights of these and the debrief chat:

  • Sharath Byregowda highlighted that by spamming followers, the company's image is put at risk.
  • Del Dewar also picked up on a fundamental flaw with the feature (why would a user follow you back?), highlighting that for bigger companies this may not be a problem, but for smaller, unknown entities, where is the inclination?
  • Richard Hill probed about potential data migration issues, should a company switch over to another account.
  • A bunch of people began highlighting traps :-)
  • Brian Osman stated “I kept asking myself ‘what was it not saying?’”
  • Most agreed that mind maps would help provide better feedback.
    • Some spoke about the benefits of collaborative mind mapping.
  • Some highlighted having dual screens would have improved their feedback.

Testing Traps

I previously said the document was littered with testing traps!  It's true.  Let's now look at those traps, and who knows, perhaps there are some other traps that I didn't intentionally add that you'd like to highlight yourself?

Scope and collaboration

The mission itself was vast!  There was far too much to allow in-depth analysis and feedback within an hour, although some did manage it pretty well.  There was general agreement that the scope of the mission was too hard to achieve within the one-hour session, and that, indeed, was an initial trap.  It was interesting to see how people approached this; some avoided the trap by focusing on a specific section of the document.  Better still, some focused on what they felt were the key risks to the project.  Others worked their way through the entire document, and although most of them still managed to produce good feedback, they could have focused their efforts better by avoiding this information-overload trap.

No one took a collaborative group approach to the mission.  Had someone stepped forward and given the group roles to play in the investigation and feedback stages, things would have gone much more smoothly.  Coverage would have been greater, and the mission could have been achieved, bypassing this initial trap of being overwhelmed by the scope.

Terminology and vague requirements

The requirements document itself, although quite big, listed very vague requirements.  It used terminology which assumed people had an understanding of the domain.  It certainly wasn't fit for review by a stakeholder.

The document stated that it would be formed of use cases, then didn't go on to provide any.  It couldn't provide example scenarios of usage, or indeed show how the feature would solve a problem for the customer.  In fact it couldn't justify that this was a problem which needed to be solved at all.  A bunch of people picked up on this, which was good.

I smell uncertainty!

“Someone’s vision”, “we just need to determine”, “an intelligent mechanism”, “potential market”, “preferred”, “reports of some kind” and “there must be a way to determine” are all extracts from this document which spell u-n-c-e-r-t-a-i-n-t-y!  Yup, in other words “I don’t really know how this will work, but it should.  Just do it!”

Those were all traps, and key risks at that!  Being able to pick up on uncertainty is certainly an essential testing skill.  Thankfully most of these got picked up in the feedback people provided.

Oh I’ve got an idea!

High D&I!  If you're familiar with the DISC profile you'll know high D&I people are usually management/sales types; people who can throw their weight about and pressure others into doing stuff for them.  The fictional sales manager in this requirement, Brian Gibb, somehow managed to get a whole new feature (his vision) introduced just from a quick sales pitch to the CEO and some others.  Terrible, isn't it?  Well, it's not uncommon; I'm sure many of you have seen a vision being produced and quickly failing to be on time, deliver value, or meet initial expectations!  Brian's vision was indeed another trap and key risk.

Additional Information

People could book a meeting with me to potentially help them with their mission; a chance to ask me questions and possibly get some snippets others wouldn't see.  Well, some of you did book a meeting, and some of you did get one of the three available snippets of information.  Of course this was another trap, in that by failing to book a meeting and ask the right questions, you'd potentially lose out on valuable insights.

The three snippets of information, which included a very handy diagram I’d drawn up to help me understand the requirements, are as follows:

“I drew up a diagram to help me get a grasp of what we’d be building.  You can have a look if you like?  It might be helpful; you can find it here:”

Darren's requirement high level mock up.

Some information about competing products we’d considered using to provide this functionality for free, and why we’d decided not to use them.

“We realised that there are a few competing products out there which already do similar things, such as Radian 6.  We did discuss whether it would be better to integrate our product with them as opposed to investing the development effort ourselves.  However, our CEO Patricia was certain we should do it this way so that we could customise as much as we liked in future.”

Some information about Brian Gibb and his last vision!

“Brian Gibb (Head of Sales) intervened on a project a few years back.  It wasn’t quite this early in the project life cycle, but it was shortly after the build had begun.  He gave his vision and stated how it didn’t match the current scope of the project.  He got his way, and the project did get finished with his proposed vision.  The project overran by about four months, and his vision changed a few times in between.  Although we did make a few sales, most customers weren’t happy with the usability of the product, and we spent a later release cycle making changes to fit our clients’ needs.

To say I’m a little nervous about another vision from Brian would be putting it lightly.”

Understanding the importance of a requirement

Another minor trap was not knowing the value these requirements had to the business.  At such an early stage, scope is likely to change: requirements will be de-scoped and new ones will come in.  Sure, the requirements had a MoSCoW prioritisation section, but only a few actually had any priority placed upon them.  Knowing their importance would have allowed us to focus on the key requirements first, as we'd know they had a lesser chance of being dropped.

Other write-ups

So that wraps up an enjoyable little session that I had fun facilitating.  Adam Brown and Brian Osman produced their own write-ups as well; both are well worth a read.  Adam Yuret got rushed home mid-session for a family emergency, and then later had another emergency when his wife went into labor :-)   Myself, I went on to repeat this challenge with my team.  Two of them have still to take the challenge, so I'll leave writing that up until a later date.

I’m hoping to see some more new faces at the next Week Night Testing session, and thanks once again to everyone who attended this one.

Thanks for reading.
