“Moose!!!” yelled the friend sitting on the passenger side. “Where?!” I shouted back but didn’t hit the brake.
Of course, my first impulse was to see the animal with my own eyes. We missed that considerable creature by a few centimeters.
Dozens of people die in moose collisions here in Finland every year. We were lucky not to add to those statistics that day.
After the incident, my instinct changed for good. Now when my copilot says anything, I react first and gather the evidence only afterwards.
There are two kinds of bugs in software development. The ones that we see every time, and the ones that can play hide and seek for months.
The latter ones are annoying because everybody wants to see them first before hitting the brakes.
I wanna see the moose! I wanna see it first!
Because it is so hard to believe just the words of our frustrated customers or the occasional notion of a tester, these bugs often linger for a long time before someone gets around to actually tracking them down for fixing.
Bug reports get tossed back and forth between product development and testing.
“Could not reproduce, please retest on build 5,” says the developer. “Still happens on 1 out of 10 tries,” continues the tester. And the roulette keeps on spinning.
These arguments waste our most precious resource, time. We tend to argue instead of investigating the real reason. This problem escalates in multi-site projects, where the development is done in two time zones and by different teams altogether. Arguing can even take weeks without actual solutions.
A tester often carries the heavy burden of proving that the bug is real and is only playing hide and seek.
Screenshots and videos are required to build the proof. And even then there may be debate over whether it is a bug or a feature.
Ultimately, figuring out bugs this way tends to escalate upwards in the organization, and the costs start stacking.
Situations such as these stem from a rivalry between testing and development. Teams, or whole release trains, each compete toward their own goals. The product has become secondary in the minds of teammates.
What a tester can do to avoid the burden of proof is simple. At the same time, it is hard, because it requires considerable skill in influencing others and a willingness to ask for help.
Testing professionals should learn to plan and ask for features like event logging, crash dumps, and debugging tools. Those are testability features.
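To make the idea concrete, here is a minimal sketch of one such testability feature: an in-memory event log the application can dump whenever a hide-and-seek bug finally shows itself. All names here are hypothetical, not from any specific product; the point is that the recent history travels with the bug report instead of being argued about.

```python
import json
import time
from collections import deque


class EventLog:
    """Keep the last `capacity` events in memory; dump them on demand.

    A ring buffer keeps the overhead bounded, so the feature can stay
    enabled in test builds without filling the disk.
    """

    def __init__(self, capacity=1000):
        self.events = deque(maxlen=capacity)

    def record(self, name, **details):
        # Timestamped, structured events beat free-form prints when
        # someone later needs to reconstruct a 1-in-10 failure.
        self.events.append({
            "ts": time.time(),
            "event": name,
            "details": details,
        })

    def dump(self):
        # One JSON line per event -- easy to attach to a bug report.
        return "\n".join(json.dumps(e) for e in self.events)


# Hypothetical usage: the oldest event rolls off once capacity is hit.
log = EventLog(capacity=3)
log.record("login", user="tester1")
log.record("click", button="save")
log.record("error", code=500)
log.record("retry", attempt=1)
print(log.dump())
```

With something like this in place, “still happens on 1 out of 10 tries” can arrive with the last thousand events attached, and the conversation shifts from proof to diagnosis.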
It is only human to believe what we see and nothing else. There is no point in fighting nature.
Testability means developing and deploying tools to demonstrate where the moose hide.
Nobody will do testability for you, so learn to plan and ask for it.