Scaling Biotech: A Framework


Scaling a biotech research platform requires a data platform that enables a wide range of project and functional teams to efficiently and effectively coordinate and share data.

Over the last few months, I’ve been writing about different aspects of this, circling around a mental model for making decisions about how to design a data platform that does this effectively for the unique idiosyncrasies of your particular organization.

I think I now have a clear enough picture of this model to write down a v1.0 description of it.

So that’s what this post is.

Abstract

First I’ll briefly summarize the general shape of the framework. If this outline sounds interesting, read the rest of the post and sign up for my (free) weekly newsletter where I’ll explore these ideas in more depth.

A biotech platform is a collection of systems and processes that allow an organization to apply a core biological hypothesis to a variety of diseases, defining individual drug programs based on narrower hypotheses that are applications of the core one.

The core activities of the platform are experiments, formal or informal, creating data to be turned into information/knowledge/insights and shared across teams, functions and programs.

To do this effectively as the number of teams, functions and programs increase, the organization’s data platform must create a balance for each type of experiment across three trade-offs:

  • Cost vs Reliability
  • Immediacy vs Generality
  • Flexibility vs Consistency

Every design decision for the data platform pushes a type of experiment in some direction along these trade-offs. So by choosing which way you want to push each type of experiment, you can make better design decisions about the platform as a whole.

In the rest of this post, I’ll explain each of these statements in more detail.

Definitions

To make sure we’re all on the same page, let’s start with some definitions:

A biotech research program/platform is a collection of people, projects and ideas organized around a core biological hypothesis that can be applied in multiple contexts to identify treatments for particular diseases.

A very relevant example today is Moderna, which built a platform around the hypothesis that messenger RNA can be delivered in the form of a drug; the most notable application so far is its COVID-19 vaccine.

CRISPR Therapeutics’ platform is based on the core hypothesis that you can use gene editing to cure diseases.

Vertex was founded on a research platform with the core hypothesis that you could geometrically design small molecules that fit into the physical structures of target proteins identified by X-ray crystallography.

The list goes on.

And it’s growing as biology research increasingly shifts towards broadly-scoped, data-enabled research methodologies.

A drug program is a single application of the platform: a team/project/etc. whose goal is to develop a narrower hypothesis that applies the platform’s core hypothesis to a particular disease.

Moderna’s work to create a COVID-19 vaccine was a drug program that narrowed the broad hypothesis “use mRNA to treat disease” down to “use mRNA to create a COVID-19 vaccine”.

A data platform is the collective whole of the software, processes and organizational structures that enable an organization to collect, manage and leverage data.

Every organization has a data platform. Only some organizations are aware of it.

And usually it’s a chimera – a collection of different systems that were built separately but shoehorned together.

A research organization will typically (ideally) have a single data platform for all the drug programs across its research platform.

A large pharma company may have multiple research groups/organizations which may each have their own data platform.

An experiment is any activity where you don’t know the outcome before you start.

Some experiments are formal activities in a lab context, such as testing whether a particular molecule binds to a particular target.

Others, you might not recognize as experiments, such as running your unit tests to see if your code works.

A biotech platform requires roughly two classes of experiments: Those that drive a single drug program forward and those that help develop or improve the platform as a whole.

The outcome of an experiment is not a physical product.

It’s data that can be turned into information/knowledge/insight/etc.

And it’s the data platform that makes this happen.

What is scale?

There are at least two different meanings of the word “scale” in this context.

And chances are the one you’re thinking of isn’t the one I’m writing about.

The most commonly understood meaning of scale when it comes to data is the original 3 Vs of Big Data: Volume, Velocity, Variety.

When Doug Laney introduced this idea in 2001, computer hardware and software were struggling to keep up with the three Vs of the data being generated.

(More “V”s were added later, but that’s another matter.)

In the two decades since then, heroic efforts and massive investments have gone into addressing these problems.

Hardware got faster. Software got smarter. Cloud computing commoditized it all.

Today, there are still a few cases where the three Vs are a problem.

But not many.

“Scaling Biotech” is about a different kind of scale: organizational complexity, coordination across functions, communication across disciplines.

How do you make sure that your research program is more than the sum of its parts, as the number of parts grows?

How do you make data travel not just across a network but from one context to another?

Addressing this type of scale requires organizational as well as technical advancements, and these are much harder to address with generalized, commodity tools.

The Experiment Factory

The basic building blocks of a research organization are experiments, but experiments are not independent.

In many cases, you’ll want to run multiple experiments that are mostly the same, with a few “parameters” changed.

  • Run a particular assay with different concentrations of different compounds, but fix everything else.
  • Try small variations of a particular lab process to see what yields the most consistent readings.
  • Tweak the hand-off process for a particular step in your pipeline until it’s working smoothly.
  • Run a suite of unit tests with each new version of your code until your tests pass.

Each of these is a series of experiments that fit into a common template but with varying parameters.

It doesn’t make sense to spend too much time optimizing a single experiment because once it’s done it’s done.

But if you’re going to be running lots of experiments within a single template, then optimizing that template is a much more reasonable investment.

This is the experiment factory.
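To make this concrete, here’s a minimal sketch of what an experiment template might look like as a data structure. It’s Python with made-up names (ExperimentTemplate, fixed_protocol), not any particular system: the template captures everything that stays fixed across runs, and each individual experiment just fills in the parameters that vary.

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class ExperimentTemplate:
        """A reusable description of an experiment: everything that stays
        fixed across runs, plus the names of the parameters that vary."""
        name: str
        fixed_protocol: dict[str, Any]   # steps, instruments, controls, etc.
        parameter_names: list[str]       # what changes from run to run

        def new_run(self, **parameters: Any) -> dict[str, Any]:
            """Create one concrete experiment by filling in the varying parameters."""
            missing = set(self.parameter_names) - set(parameters)
            if missing:
                raise ValueError(f"Missing parameters: {sorted(missing)}")
            return {"template": self.name, **self.fixed_protocol, **parameters}

    # Example: the same binding assay run at different compound concentrations.
    binding_assay = ExperimentTemplate(
        name="compound-binding-assay",
        fixed_protocol={"assay": "SPR", "target": "protein-X", "replicates": 3},
        parameter_names=["compound_id", "concentration_nM"],
    )
    run = binding_assay.new_run(compound_id="CMPD-001", concentration_nM=100)

Optimizing the template (the class, in this toy version) pays off across every run, in a way that optimizing a single run never could.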

Trade-offs

From the perspective of the experiment factory, scaling a biotech research program means optimizing experiment templates in order to enable organizational scale: more drug programs, more functional and project teams, more experiments and more experiment templates.

But how do you do that?

In general, if you want to optimize something, you need to know three things about it:

  1. What state is it in now?
  2. What state do we want it to be in?
  3. How do we get it into that state?

The idea behind the Scaling Biotech Framework (v1.0) is that you can answer these questions by looking at three trade-offs, which we’ll discuss in the following sections of this post.

The trade-offs are partially correlated, but different enough that they’re worth considering separately.

For each experiment template you can answer the first two questions in terms of where the template is/should be along these three trade-offs.

From there, the goal is to identify concrete design decisions that will push it in the appropriate direction to answer the last question.

The next three sections discuss these three trade-offs with examples of design decisions that can push an experiment template in one way or the other.

Cost vs. Reliability

This one should be a no-brainer, right? Every experiment should be as inexpensive as possible and as reliable as possible.

But it usually isn’t that simple.

The term “cost” here should be interpreted more generally than just dollars: Experiments also have cost in terms of time, cognitive load, personal energy (spoons), frustration, etc.

Many of the things that make an experiment more reliable also make it more expensive:

  • Adding an approval process increases the chances that the experiment will successfully collect the right data. But it also increases time and frustration.
  • Requiring a theoretical justification for an experiment increases the likelihood that the data will be relevant. But it also increases cognitive load and personal energy.
  • Implementing clear intermediate metrics and monitoring ensures that long-running experiments can be adjusted or terminated if they start to go off track. But it also increases time, cognitive load, personal energy and maybe even dollars.

Each of these is a design decision that pushes an experiment template towards the reliability end of the trade-off.

Other design decisions that reduce costs may make experiments less reliable:

  • Automating the process of starting an experiment, e.g. with a button on a web page, reduces the time and energy cost. But it also increases the risk that someone will run a duplicate experiment or forget something that makes the experiment invalid.
  • Inferring/auto-filling experiment parameters from existing data sources reduces the time and cognitive load cost of an experiment. But it also increases the risk that someone won’t notice the inferred parameters were incorrect. (See the sketch after this list for one way to mitigate that risk.)
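As one concrete illustration of how a design decision moves a template along this trade-off, here’s a minimal sketch of that last bullet: parameters are auto-filled from an existing data source, but the scientist has to confirm them before the experiment starts. The names (autofill_parameters, lookup, confirm) are hypothetical stand-ins for whatever registry or review step your organization actually has; the idea is that the confirmation step claws back some reliability without giving up most of the cost savings.

    def autofill_parameters(sample_id: str, lookup) -> dict:
        """Infer experiment parameters from an existing data source.
        `lookup` is a stand-in for whatever registry or LIMS query you have."""
        record = lookup(sample_id)
        return {
            "sample_id": sample_id,
            "cell_line": record.get("cell_line"),
            "passage_number": record.get("passage_number"),
        }

    def submit_experiment(sample_id: str, lookup, confirm) -> dict:
        """Cheap to start (a single call), but the confirmation step guards
        against silently running with incorrectly inferred values."""
        params = autofill_parameters(sample_id, lookup)
        if not confirm(params):  # e.g. a review screen shown to the scientist
            raise RuntimeError("Inferred parameters rejected; edit and resubmit.")
        return params

    # Stand-in lookup/confirm callables, just to show the flow.
    submit_experiment(
        "S-0042",
        lookup=lambda sid: {"cell_line": "HEK293", "passage_number": 12},
        confirm=lambda p: True,
    )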

When experiments are inexpensive enough, rerunning them may cost less than the effort required to make them more reliable.

But for expensive experiments, the cost of rerunning them will be larger than the potential savings from design decisions that reduce their reliability.

So it usually makes sense to choose design decisions that make expensive experiments more reliable, and inexpensive experiments less expensive.

I call this the Experiment Cost Inflection Point.

For each experiment template, the key is to assess where it currently lives on this trade-off, where you want it to live, and what design decisions will get it there.

Immediacy vs Generality

This trade-off is based on an idea introduced by Sabina Leonelli that data needs to travel not only between systems but between contexts.

Whether it’s a readout from a lab instrument, a manually captured observation or something else, the person who collects the data will have a particular understanding of what they’re collecting it for, and thus how they should collect it.

This is their context.

The next person who uses the data will have a different context with different ideas about how it should’ve been collected.

  • Maybe they want information that wasn’t collected, such as the room temperature when the sample was prepared.
  • Maybe they care about what lab the mouse was sourced from, not just its lineage.
  • Maybe they want a table of data rather than free-form notes.

The person who collects the data may or may not know what contexts the data will be used in.

The person who develops the template may not know either.

So the trade-off is: Do you optimize data collection for the context in which the data is collected or the context in which it will or may be used?

Again, there are concrete design decisions that push it towards one end or the other:

  • Collecting data in free text allows the user to tailor data collection to the immediate circumstances of the experiment. But requiring structured fields allows later users to quickly transform the data into their own context. (One common middle ground is sketched after this list.)
  • Tagging samples with values from an established ontology allows later users to apply the data in their own context with minimal translation. But custom tags allow data collection to be specific when the established ontology doesn’t quite fit.
  • Requiring that all data be stored in a database ensures that future users will be able to find and access it. But for the data collector, creating a spreadsheet is faster and allows them to adjust the format as they go.
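For the first two bullets in particular, one common middle ground is a record that carries both: a few structured, vocabulary-backed fields so the data can travel to other contexts, plus a free-text field for whatever the immediate context needs. Here’s a minimal sketch, with a made-up vocabulary standing in for a real ontology:

    from dataclasses import dataclass

    # Stand-in for an established ontology's controlled vocabulary.
    TISSUE_TERMS = {"liver", "kidney", "lung"}

    @dataclass
    class SampleRecord:
        sample_id: str
        tissue: str       # constrained to the shared vocabulary (generality)
        notes: str = ""   # free text for the collector's immediate context (immediacy)

        def __post_init__(self) -> None:
            if self.tissue not in TISSUE_TERMS:
                raise ValueError(f"Unknown tissue term: {self.tissue!r}")

    record = SampleRecord(
        sample_id="S-0042",
        tissue="liver",
        notes="Prep delayed ~20 min; room felt warmer than usual.",
    )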

For each of these examples, you’ve probably encountered cases where one side of the trade-off is the clear winner and others where the opposite makes more sense, or maybe something in between.

Again, the key is to assess each experiment template against this trade-off and decide what design decisions will push it towards where you want it to live.

Flexibility vs Consistency

This is the one with the clearest conflict between the two ends of the trade-off: If you make a process flexible, users will take advantage of that flexibility and the results will be inconsistent.

It’s also the one with the clearest natural evolution: Early on in the development of a research platform, your hypotheses will be broader, so you’ll want more flexibility to explore.

As the platform develops and you narrow down those hypotheses, the value of having consistent processes and thus consistent data will outweigh the need for flexibility.

So platforms will tend to evolve from flexibility to consistency.

The question, though, is which design decisions will support this evolution, and which will let your organization re-introduce flexibility if you’ve moved towards consistency too quickly, before allowing enough time to explore.

  • Capturing structured data rather than free-form text/notes enforces consistency in how data is structured. Including free-text fields for some of the data dials it back towards flexibility.
  • Automating processes and workflows enforces consistency by making sure the process happens the same way every time. Allowing custom runs with parameters or manual intervention adds back in some flexibility. (See the sketch after this list.)
  • Enforcing strictly defined Standard Operating Procedures (SOPs) and approval processes increases consistency. Carving out exceptions for “development” activities allows for flexibility.
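Here’s a minimal sketch of what the middle ground from the second bullet can look like (hypothetical names, not any particular workflow system): production runs always use locked defaults, while development runs can override parameters, but only through an explicit flag that stays visible in the record of the run.

    from typing import Optional

    DEFAULTS = {"incubation_hours": 24, "temperature_c": 37}

    def run_workflow(sample_id: str, overrides: Optional[dict] = None,
                     development: bool = False) -> dict:
        """Consistency by default: production runs always use DEFAULTS.
        Flexibility on request: development runs may override parameters,
        and the overrides stay visible in the returned record."""
        params = dict(DEFAULTS)
        if overrides:
            if not development:
                raise ValueError("Overrides are only allowed in development runs.")
            params.update(overrides)
        return {"sample_id": sample_id, "params": params, "development": development}

    # Production run: identical parameters every time.
    run_workflow("S-0042")
    # Development run: flexibility, but explicitly flagged.
    run_workflow("S-0042", overrides={"incubation_hours": 6}, development=True)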

Consistency tends to be correlated with generality (as opposed to immediacy) since consistent data allows someone in a different context to analyze all the data together.

But you can also increase consistency in ways that promote immediacy, such as adopting a fixed vocabulary that is custom to the immediate context rather than externally defined.

Based on these examples, it’s also clear that increasing consistency may correlate with reducing cost (automation) or increasing reliability (SOPs and approval).

Conclusion

The Experiment Factory model gives you a framework for thinking about design decisions that you might not have considered, and weighing options that seem equally promising outside the context of a particular experiment.

Over the next few months on this blog, I’ll dive deeper into different areas of data platform design from this perspective. (These will probably be long-form posts like this one rather than the shorter posts I’ve done the last few months.)

And I’ll continue exploring short/quick ideas on my (free) weekly newsletter. If you read this far and want to learn more, sign up and give it a try!
