Over the last 5 years I’ve had the opportunity to integrate and work with about a dozen different companies – some big, some small, every one of them different from the others.
One interesting approach I have experienced over the last few months at one of those businesses is to take a scientific approach to business decisions. A feature change is driven by some business or product goal, but there isn’t necessarily an assumption that a proposed change is guaranteed to address the goal – it’s a hypothesis. To test the hypothesis, tools are needed to measure progress toward the goal, and an experiment needs to be devised.
In practice, this can mean many different things. It might be that every day someone has to collect numbers to update a spreadsheet, or that code is instrumented with extra logging or performance measurements, or that someone needs to create a report from old data. A quick prototype may be used to test a concept, or a bigger task may be broken down to find a smaller initial experiment.
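As a sketch of what that lightweight instrumentation might look like (the event names and helper functions here are hypothetical, not from any particular product), a simple in-process counter can record how often a feature is actually used, so the numbers exist when it’s time to review the experiment:

```python
import json
import time
from collections import Counter

# Hypothetical in-memory metric store; a real system would persist
# these events to a log file or analytics service instead.
_events = Counter()

def record_event(name: str) -> None:
    """Count one occurrence of a named event, e.g. a feature being used."""
    _events[name] += 1

def snapshot() -> str:
    """Serialize the current counts with a timestamp for later review."""
    return json.dumps({"ts": time.time(), "counts": dict(_events)})

# Example: instrumenting a feature whose value is under test
record_event("search_used")
record_event("search_used")
record_event("export_used")
print(snapshot())
```

The point is less the mechanism than the habit: the measurement is added alongside the feature, so the hypothesis can be checked against data rather than impressions.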
Framing any proposed solution as a hypothesis has some interesting effects.
- Hard-to-measure goals may get lower priority
- There is less stigma around effort spent on a hypothesis that doesn’t attain the goal – experiments are expected to fail
- It’s easier to break a bigger task into smaller experiments that prove the hypothesis before expanding scope
- Metrics added to the product or business accumulate over time, creating more insight that hopefully drives better future decisions
- Removing the assumption that an idea is right lowers everyone’s inhibitions, so ideas are contributed with less judgement
- Even when the CEO suggests something, it’s still an idea that needs to be proven with an experiment
There are trade-offs.
- It takes extra time to record metrics and review the results
- Things that are difficult to measure may get undervalued – e.g. developer happiness, code cleanliness, velocity
- Experimental prototypes might stay in production for a long time
- Over time, ad-hoc data collection can result in extra complexity – things stored in different places by different people to answer different questions
- Small failed experiments that are part of a bigger hypothesis may leave behind half-implementations that were not worth finishing
Overall, I think it’s an interesting way to drive the decision-making process within a business, and something I will apply on future projects.