Broken Windows
As a test manager I have observed the principles of the Broken Windows theory in software, and I struggle against it every day. The basic principle is that a window left broken in an old building soon results in more broken windows. It spreads to other buildings. Soon a whole estate is degrading: it's not just broken windows and graffiti, but robberies and worse.
A couple of years ago I'd say our software was in a situation where nearly all the windows were broken; bugs were that easy to find. As time progressed the bugs got worse: session crashes, performance issues.
Our company had focused on rapid content build, making a solution that was easier to sell. It was the correct move at the time; it brought in sales!
After a while we noticed the hidden costs. Content wasn't used because it wasn't of great quality: it looked poor and constantly suffered issues. Our culture had changed. Where there was once one broken window, soon there were many. Where once a single screen didn't have a great layout, soon many had poor layouts, and soon there were many bugs, session crashes and performance issues. With many broken windows, people didn't think it mattered any more; what's one more broken window amongst a building full of them?
Today it's a different story for our product, achieved through a lot of hard work, a change in expectations, and a refusal to accept less.
Change
In New York 15 years ago serious crime was a major problem. Today serious crime in New York is low. This wasn't achieved overnight; a lot of work went into making it happen. First came a change in attitude: police no longer tolerated petty crimes. A zero-tolerance policy. The idea was that people would change their views of what was acceptable. This was accompanied by a large rise in police officers on the beat, which required hiring many new officers, a serious investment in solving the problem. The officers weren't just there to clamp down on petty crimes. They got to know the people, built relationships, changed attitudes. Stop catching and start preventing.
To change our software we had to change expectations, and we had to invest in that change.
Expectations
We worked on setting the expectations clearly. The product must work at specific resolutions on specific browsers. It must look good and adhere to the design guidelines. Constant communication helped reinforce the expectations.
Acceptance
We also changed what we accepted: there was no excuse for the product not working at the specified resolution on the specified browser; we had clearly set the expectation. Zero Tolerance.
“If you don’t have IE6, get access to IE6!”
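To make the zero-tolerance idea concrete, here is a minimal sketch of what writing the expectation down as an executable check might look like. The post doesn't describe any particular tooling, so Selenium WebDriver, pytest, the resolutions and the URL below are illustrative assumptions, not the team's actual setup.

```python
# A hypothetical sketch: the supported matrix is the expectation,
# written down where nobody can miss it.
import pytest
from selenium import webdriver

SUPPORTED_RESOLUTIONS = [(1024, 768), (1280, 1024)]  # assumed values

@pytest.fixture(params=SUPPORTED_RESOLUTIONS, ids=lambda r: "%dx%d" % r)
def browser(request):
    # Swap in whichever browsers the expectation names (e.g. IE via its driver).
    driver = webdriver.Firefox()
    driver.set_window_size(*request.param)
    yield driver
    driver.quit()

def test_page_renders_without_horizontal_scroll(browser):
    browser.get("http://example.com/")  # placeholder URL
    # Zero tolerance: a horizontal scrollbar at a supported resolution fails.
    scroll_width = browser.execute_script("return document.body.scrollWidth")
    window_width = browser.execute_script("return window.innerWidth")
    assert scroll_width <= window_width
```

The particular check matters less than the principle: once the expectation is a failing test rather than a line in a document, nobody can quietly ignore it.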
Investment
It took a release cycle of re-working a large amount of code; the business agreed to that investment, and the results of our efforts are astounding. It amazes me each day what a higher standard our developers produce. It looks great and it works. Testers encounter far fewer defects when they first see the software; developers have already done their due diligence and often raise issues before we can. It was a fine line between being the enforcers and being members of a team trying to make a better product, changing attitudes. These days the tester isn't seen negatively for raising issues; their opinion is valued in making the best product.
Preventing future broken windows
Testers have more time to look further afield now: raising expectations in other areas, making the software more usable, suggesting enhancements to the user experience, raising performance expectations. There are more broken windows out there, in other aspects of the software, and we're working hard to mend them.
Good post, Michael!
I think this area is very important. Consider the combination of broken windows with what I call a testing debt.
What if we were able to talk about this more openly to our peers?
BR,
Martin
Thanks, Martin.
That's quite a thought-provoking comment, worthy of a dissertation in answer.
In short, though, I would say there are a couple of things that can be done to help manage the testing debt and avoid broken windows.
I'm keen on testing early, testing often, and testing as close to reality as possible. But as functionality grows it's not possible to test everything, and testing debt creeps in. Choosing which tests to run, managing that debt whilst being a bug advocate, and flagging failing automation to stop the broken windows is what makes us testers, not just robots.
I'm sure I'll post again about these skills.
One thing I often see is that we as testers adapt to our situation and plan accordingly. At that stage we take shortcuts based on our new situation, yet I rarely see us communicate back to our stakeholders what those shortcuts mean.
When I coach, I tell the test lead to plan according to what she and her group need to do to meet the specific information objectives and quality level. Lack of resources is a state that could change.
I think this area can be delved into much more deeply. Weinberg has written a lot in this area, and it might need to be resurfaced.
Having just been through a release, I would agree that we don't communicate the impacts. I would also argue that it's difficult to do in a meaningful way. I've spent a week consulting and fine-tuning a test summary. Each stakeholder wants something different, and none want too much information. If I'm finding it difficult to pitch enough but not too much information, I'm sure the testers have the same trouble.
I'm finding that time spent with stakeholders is always beneficial: challenging, but worthwhile.
I’d be interested in any reading recommendations you have.