Challenge: Testing the Future


This post diverges slightly from the norm in that it is not supplied by the esteemed Darren but instead by me, Stuart MacDonald, his additionally esteemed colleague.  First, though, a little background about myself is in order.

I graduated in 2009 from Strathclyde University (Glasgow) with a degree in Computer & Electronic Systems (a hybrid of Computer Science and Electronic Engineering).  I started my current position as a Software Tester in October 2009 and have been gaining as much experience as I can in the various aspects of software testing within a real-world development environment.

But enough of the uninteresting facts, let’s get on to the fun stuff :)

Following on from the first challenge that our team was tasked with undertaking, I put myself forward as the candidate to devise the next one.  Conclusions drawn from the first challenge showed that the interesting findings weren’t the actual bugs each participant discovered but rather the approaches they used to find them in the first place. For this reason, I decided that this next challenge should take a step back from the actual finding of issues and instead focus on how we approach testing at a high level.

After doing some research and looking for inspiration, I came across an excellent post by Lanette Creamer about a hypothetical scenario, which invited readers to ask any questions they felt were relevant in order to proceed with testing a system.  Reading this post, and the responses Lanette gave to each question, provided an excellent example of how people respond to a task when given only a high-level view of the overall system.

Using this example, I came up with a scenario that I felt people could respond well to and that would satisfy my aims: getting people to think about the processes and techniques they use when approaching a task for the first time, and about the areas of knowledge they feel are necessary for a tester to have before being able to evaluate quality.

So, armed with a bit of background information about why this particular challenge was devised, it’s time to move onto the scenario itself!

However, before we do that I’d like to point out that Darren called for participants from afar, and quite a lot of people put themselves forward for this challenge.  Only two, however, came back with their results: one being the inspiring Albert Gareev, who came up with the idea for our last challenge; the other being Tony Bruce, who blogged about his response to the challenge himself, and a very good response at that ;-)  Thanks again Albert & Tony for participating; hopefully more people will join in next time.

The Challenge!

Testing the Future

Scenario :

Your good friend Dr. MacDonald has been developing a time-travelling car, which he aims to use to allow a person to travel backwards or forwards in time.  He states that he has tested each of the components separately and has tested the system as a whole using two synchronised clocks, one of which was sent 1 minute into the future with the car; when the car reappeared, there was exactly 1 minute of difference between the clocks.

He states that he is now happy to use the system and wishes to be the first person to travel in time.

Task :

Dr. MacDonald has hired you at this late stage in development to consult on the level of testing he has carried out so far and whether he needs to do any more.

So, the task is simply this.  What questions would you ask Dr. MacDonald about the system and its testing?

Time :

Please take no longer than 30 minutes to come up with as many questions as you can.

Results Summary

I was initially going to provide just a summary of the main points and trends that were evident from the participants’ responses; however, that wouldn’t really give the full picture of the differing techniques used.  So, for anyone interested in viewing the full collection of responses, they can be found here.

In order to make the summarised points more readable, I have grouped them into a number of themes covering the relevant areas.

Requirements & Stakeholders:

  • 4 people asked for the aims and acceptance criteria of the system
  • 3 people asked if the documentation/system has been peer reviewed
  • 2 people asked who the stakeholders are

System Details:

  • 6 people asked if the machine had any limits on distance of time travel
  • 3 people requested documentation on each of the components
  • 1 person asked if there were any reporting systems

Testing Process:

  • 6 people provided examples of further testing that should be considered
  • 5 people asked if the physical condition of the machine / objects had been examined after testing
  • 4 people requested details of completed component testing
  • 3 people asked if integration testing had been performed
  • 1 person asked if user acceptance tests had been considered
  • 1 person asked if there was any risk assessment of components
  • 1 person asked for an indication of test coverage

Safety & Security:

  • 7 people asked if there were any fail-safe measures in place for system failures
  • 2 people asked if there were any security measures in place to prevent misuse

Usability:

  • 2 people asked if any training or user guides were required in order to use the system
  • 1 person asked how the machine would handle user errors

Approaches & Techniques:

  • 1 person provided a full test plan for load testing the machine
  • 1 person provided a mind-map to identify key areas and provide questions for each
  • 1 person mentioned using the V-Model for identifying areas of testing required
  • 1 person provided information on who had supplied the initial specification

Wider Considerations:

  • 6 people asked about any possible environmental issues caused by the machine (including the effects on time itself :p)
  • 3 people specifically questioned the ethics of using time-travel
  • 1 person gave their recommendation on whether the machine is fit for use based on current knowledge
  • 1 person questioned the legality of the proposed system

There are definitely a lot of interesting points to take away from these results, and for me they also revealed a great deal about the mindsets of the various participants.  Several common questions were put forward, such as the boundary limits on how far the machine can travel, details of the components, and documentation of any testing already carried out.  Some of the more interesting responses included a full test plan for a system load test (provided by our resident load tester :p), and a reply in the form of a mind-map supplied by a certain tester who has been exploring lean test case design recently ;)


Differing from the previous challenge, it was decided that all responses would be made visible to the participants to allow people to analyse the results for themselves and draw their own conclusions about how they had performed and what lessons they had learned from the experience.  In order to focus this discussion towards the initial aims, I asked each participant to answer the following retrospective questions:

  1. After looking at all of the responses, would you change your approach if doing the task again?
  2. Do you think your approach to the task was different in comparison to the first challenge?
  3. What interesting points did you notice from people’s responses?

The responses to these questions are, to me, the whole point of the exercise.  It’s natural for different people to operate under different mindsets given their experience or knowledge in certain areas.  What’s important here are the lessons people learned after viewing all of the responses, and the key points they took away from each of the techniques that others employed.

Personally, one of the aspects I was interested in seeing was what assumptions people made when analysing the scenario, and how various people challenged these.  Some very interesting questions arose from this, such as whether the Doctor can be trusted, whether anyone else has seen the machine in operation, and even the validity of using a car as the mode of transport in the first place.  I like these responses a great deal since they emphasise that in the field of testing it’s imperative we not only ensure that requirements are being met, but also analyse the requirements themselves for validity, to ensure that they truly match whatever need is to be fulfilled.

The feedback from the other participants also made for a very interesting read. Besides noting various areas of questioning they had missed, a large majority also stated that if they had used some of the more focussed techniques which others had implemented (such as mind-mapping), these areas probably wouldn’t have been overlooked.

Another interesting note was that, putting aside the differences between this challenge and the more “hands-on” Escape challenge, a lot of people said they had changed the way they approached the task, with most stating that they had either broadened the aspects of testing they were considering or taken a more structured approach to the task itself.  Being the awkward person that I am, I will admit that I actually did the opposite: the first challenge saw me create a structured set of very detailed test cases, which inevitably meant I ran out of time before I felt I had reached a good level of coverage.  For this challenge, I decided simply to write down the questions that came to mind without any planned structure or process in place. The result was that I inadvertently created a structure as I went along, by identifying an aspect, identifying questions related to it, and then moving on to the next.  In essence, I had created a mind-map without realising it.

All in all, this turned out to be a very enjoyable and popular challenge that everyone seemed to have taken something useful away from.  The next stage is to see how people apply these lessons in their daily tasks at the various levels of the testing process.

Now bring on the next challenge! :)

Related posts:

  1. Challenge: Escape! Games+testing = fun
  2. The usability challenge!
  3. The Impossible Challenge?