Scaling a biotech research program means managing and optimizing sequences of closely interrelated experiments, i.e., an Experiment Factory.
A lot has been written about the value of creating short, inexpensive experiments to explore an idea. But is there ever value in making an experiment slower and more expensive? (My answer may surprise you.)
When I first started working on the data infrastructure for an R&D biotech startup, I immediately recognized a bottleneck caused by our in-vitro validation process.
If we could get those experiments to be fast and cheap we could not only explore more hypotheses but also feed more data back to improve the ML models driving the exploration.
But there was one big problem: the incubation step, which required multiple unavoidable weeks.
So better software might have been able to shave a few hours, or even a couple of days, off the beginning and end of the experiment. But that wouldn't make a fundamental difference.
In some contexts, making a particular type of experiment fast and cheap enough can cause a fundamental change in the way we relate to it.
Testing a computer program is a good example.
(I’m using a broad definition of an experiment: any activity whose outcome you don’t know before you start.)
In the days when computers took up a whole room, a programmer might get one chance every few days to run their program. So leading up to that run, they would spend hours proofreading their code to find any bug that might ruin their next opportunity.
Today, coders can run their program immediately. It’s faster to run it and see where it breaks, so why bother proofreading?
That’s a fundamental change.
Once we begin to realize the benefits of an inexpensive experiment, we tend to invest in making it even faster and cheaper.
In software development, practices like Test Driven Development involve doing more work up front to make the test run automatically, so an experimental run doesn’t even cost mental energy.
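As a sketch of what "an experimental run costs nothing" looks like in practice, here is a minimal automated test in Python (the `dilution_factor` helper is invented for illustration). Once written, the check reruns instantly and for free every time the code changes:

```python
# A minimal automated test: the "experiment" of checking behavior
# runs instantly and repeatably, so there's no reason to proofread instead.
# (dilution_factor is a hypothetical helper, for illustration only.)

def dilution_factor(stock_conc: float, target_conc: float) -> float:
    """Return how many times to dilute a stock to reach a target concentration."""
    if target_conc <= 0 or stock_conc < target_conc:
        raise ValueError("target must be positive and no greater than stock")
    return stock_conc / target_conc

def test_dilution_factor():
    assert dilution_factor(100.0, 10.0) == 10.0
    assert dilution_factor(50.0, 50.0) == 1.0

test_dilution_factor()
print("all tests passed")
```

The upfront work goes into writing the test once; afterward, each run costs neither time nor mental energy.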
For an expensive experiment, on the other hand, the potential savings from making the experiment cheaper are often much smaller than the potential added cost of having to do it over again.
So instead of investing in making the experiment cheaper, we tend to invest in making it more reliable: doing extra research to validate the hypothesis, double-checking that things are set up correctly, and monitoring the experiment throughout its progression.
Many of these activities involve creating smaller, cheaper experiments before, during, and after the main experiment.
These smaller experiments often use proxy measures that don’t directly measure the outcome of the big experiment, but give you a rough idea of what it’s likely to be.
In fact, you can think of the entire drug development pipeline as a series of proxy experiments – large scale screening, in-vitro experiment, in-vivo experiment, Phase 1 trial, Phase 2 trial – that build up to the one measure that actually counts: the outcome of a Phase 3 clinical trial.
These activities increase the chances that each next experiment will be successful, but they make the end-to-end experiment more expensive.
We don’t do these things for cheap experiments because rerunning a cheap experiment when we get it wrong is cheaper than making sure it was right the first time.
This is the experiment cost inflection point: we tend to make inexpensive experiments even cheaper, but make expensive experiments more expensive.
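The trade-off behind this inflection point can be made concrete with a back-of-envelope expected-cost calculation (all numbers below are made up for illustration): verification pays for itself only when it costs less than the reruns it prevents.

```python
def expected_total_cost(run_cost: float, fail_prob: float,
                        verify_cost: float = 0.0) -> float:
    """Expected cost to get one successful run, retrying on failure.

    Each attempt costs run_cost + verify_cost and independently
    fails with probability fail_prob, so the expected number of
    attempts is 1 / (1 - fail_prob).
    """
    return (run_cost + verify_cost) / (1.0 - fail_prob)

# Cheap experiment: just rerunning on failure beats paying to verify.
cheap_rerun    = expected_total_cost(run_cost=10, fail_prob=0.30)                 # ~14.3
cheap_verified = expected_total_cost(run_cost=10, fail_prob=0.05, verify_cost=8)  # ~18.9

# Expensive experiment: the same verification is a bargain.
costly_rerun    = expected_total_cost(run_cost=10_000, fail_prob=0.30)                    # ~14,286
costly_verified = expected_total_cost(run_cost=10_000, fail_prob=0.05, verify_cost=800)   # ~11,368
```

With identical failure rates on both sides, the only thing that flips the answer is the run cost, which is exactly why the same verification habits that look wasteful for cheap experiments look prudent for expensive ones.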
Over the inflection point
And usually this makes sense.
Where it may not make sense is for an expensive type of experiment that, with the right kind of investment, could be pushed across the inflection point to the inexpensive side.
This happened with software testing because of the investments driving Moore’s law. But it took decades and billions of dollars to get there.
If an expensive type of experiment is close enough to the inflection point, then making that investment can have a huge payoff by turning it into an inexpensive experiment.
But if it’s too far from the inflection point or there’s a fundamental reason it can’t cross it (such as a long biological incubation period) then trying to make the experiment cheaper could actually backfire by making it less reliable.
So the key to optimizing a research program, from the perspective of an experiment factory, is to understand which side of the inflection point each type of repeated experiment is on, and invest accordingly.
And on the rare occasion when you recognize that an expensive type of experiment is close to that inflection point, you can choose to push against the tendency to make it more expensive, to bring it to the other side.
What this looks like
Data management systems can push experiments in either direction, depending on design decisions.
Default values and auto-complete in forms make experiments less expensive by reducing cognitive load, but can introduce errors that make them less reliable.
Forcing deliberate choices that require review or approval costs time and cognitive effort, but makes the experiment more reliable.
Both approaches increase the amount of data you collect.
Automation that removes human intervention reduces the time and mental energy costs, but makes it harder to catch errors or intervene to address them.
Automation that directs and channels human intervention can make it easier to identify and correct errors, but increases cognitive load.
Alerts may make experiments faster by reducing the time between manual steps, or make them more cognitively expensive by forcing you to pay attention to more of the details.
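As a minimal sketch of the first trade-off above (the data model, field names, and plausibility range are all invented for illustration), the same form field can be defaulted for speed or made a deliberate, confirmed choice for reliability:

```python
from dataclasses import dataclass

@dataclass
class IncubationStep:
    """A hypothetical record of one experimental setup choice."""
    temperature_c: float
    hours: int

# Inexpensive direction: defaults reduce cognitive load, but a stale
# or wrong default can silently produce an unreliable experiment.
def quick_entry(temperature_c: float = 37.0, hours: int = 48) -> IncubationStep:
    return IncubationStep(temperature_c, hours)

# Reliable direction: every value must be supplied and explicitly
# confirmed, which costs time and attention but catches setup errors.
def deliberate_entry(temperature_c: float, hours: int,
                     confirmed: bool) -> IncubationStep:
    if not confirmed:
        raise ValueError("entry must be explicitly confirmed before recording")
    if not (4.0 <= temperature_c <= 45.0):
        raise ValueError(f"temperature {temperature_c} C outside plausible range")
    return IncubationStep(temperature_c, hours)
```

Neither function is "correct" in the abstract; the choice depends on which side of the inflection point the experiment being recorded sits on.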
Each design decision for your data platform will push the processes it involves one way or the other along the spectrum from inexpensive to reliable. By deliberately considering which direction you want to push each process, and how your decisions support that, you can not only make better decisions but also communicate and motivate them to stakeholders.