The Experiment Factory

For a biotech organization to scale its research program, it must balance the flexibility needed to explore a variety of biological hypotheses against the consistency needed to make a collection of observations into more than the sum of its parts.

The balance between these opposing forces will shift over time, from nearly complete flexibility while developing the core hypothesis of the program to an increasingly narrow set of flexible dimensions as the program scales.

The organization’s data platform contributes to this by making these decisions explicit, by helping to enforce the appropriate consistency, and by exploiting that consistency to generate insights.

Let’s unpack this.

By a research program, I mean a set of drug programs based on a common core hypothesis that determines a consistent overall structure for each program, and a template for defining new programs.

This may define the platform of a platform-focused biotech startup or it might apply to a research group within a larger R&D organization.

Enforcing consistency could mean narrowing down the therapeutic areas of interest, ruling out less informative types of readouts, standardizing sample preparation protocols, etc.

Each of these steps pushes the research program’s experiments towards a template, whether explicit or implicit.

This narrows the set of hypotheses that can be explored down to the most promising directions.

But it also ensures that the data from each experiment is more comparable to the data from the other experiments in the same template.

The more comparable the data is, the more reliably it can be combined or used to inform future experiments and draw broader conclusions.

There are two types of experiments in this evolution from flexibility to consistency: the biological experiments that drive each individual drug program, and the process experiments that determine the best therapeutic areas, assays, protocols, etc.

Here I’m using a broad definition of an experiment as any activity where you don’t know what the outcome will be before you start it.

For any category of experiment, there will be some that are one-offs, done on their own in the context of pure exploration.

But there will also be cases where you’ll want to do repeated variations of the same experiment over and over.

These are the templates.

Repeated biological experiments come from narrowing the dimensions of flexibility. Repeated process experiments come from choosing a process to optimize, and exploring the best way to narrow the flexibility it allows for the biological experiments.

The first one is your innovation process. The second is innovating your process.

In both cases, the key is to make the experiments repeatable along the dimensions that need to be consistent, while maintaining flexibility along the dimensions that make it an experiment.

This is the experiment factory.
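To make this concrete, here’s a minimal sketch of how a template might be represented in code. The assay and protocol names are made up, standing in for whatever dimensions your program has narrowed down. The consistent dimensions are validated fields; the flexible dimensions are deliberately left open:

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical controlled vocabularies: the dimensions the program
# has narrowed down and now holds consistent across experiments.
ALLOWED_ASSAYS = {"cell_viability", "rna_seq"}
ALLOWED_PROTOCOLS = {"prep_v2", "prep_v3"}

@dataclass(frozen=True)
class ExperimentTemplate:
    """The consistent dimensions: every experiment stamped out of this
    template shares them, so the resulting data stays comparable."""
    assay: str
    prep_protocol: str

    def __post_init__(self) -> None:
        if self.assay not in ALLOWED_ASSAYS:
            raise ValueError(f"assay must be one of {ALLOWED_ASSAYS}")
        if self.prep_protocol not in ALLOWED_PROTOCOLS:
            raise ValueError(f"prep_protocol must be one of {ALLOWED_PROTOCOLS}")

@dataclass
class Experiment:
    """One run of the template. The flexible dimensions, the actual
    hypothesis being tested, live in `variables`."""
    template: ExperimentTemplate
    variables: dict[str, Any] = field(default_factory=dict)

# Consistent where it must be, flexible where it's still an experiment:
template = ExperimentTemplate(assay="cell_viability", prep_protocol="prep_v3")
exp = Experiment(template, variables={"compound": "CMPD-17", "dose_uM": 1.0})
```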

Narrow too soon and you risk cutting off more fruitful directions or throwing away work when you eventually realize your mistake. Narrow too late and you end up with even more inconsistent experiments and data.

This is where the data platform comes in.

A well designed data platform contributes to the experiment factory in three ways:

  1. It frames the narrowing process in terms of concrete technical decisions that force your organization to deliberately consider the choices it’s making.
  2. It helps enforce these decisions by making the narrowed-down options either the easiest choice or the only choice (see the sketch below).
  3. It provides the tools for gathering the results and leveraging the consistency.
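Point 2, for example, often comes down to schema design: if the table that stores results only admits the narrowed-down options, the consistent choice becomes the only choice. A minimal sketch, assuming a SQLite backend and a made-up readout vocabulary:

```python
import sqlite3

conn = sqlite3.connect("platform.db")

# The approved readout types are baked into the schema itself,
# so an off-template value cannot be recorded at all.
conn.execute("""
    CREATE TABLE IF NOT EXISTS readouts (
        experiment_id TEXT NOT NULL,
        readout_type  TEXT NOT NULL
            CHECK (readout_type IN ('ic50', 'percent_inhibition')),
        value         REAL NOT NULL
    )
""")

conn.execute("INSERT INTO readouts VALUES (?, ?, ?)",
             ("EXP-001", "ic50", 0.42))  # on-template: accepted

try:
    conn.execute("INSERT INTO readouts VALUES (?, ?, ?)",
                 ("EXP-002", "vibes", 1.0))  # off-template: rejected
except sqlite3.IntegrityError as err:
    print(err)  # CHECK constraint failed
```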

To perform these functions effectively, your data platform must support processes at different points on the flexibility/consistency scale while also providing a mechanism to smoothly migrate them along the scale.

Most pieces of software are optimized for a particular point on the flexibility/consistency scale, and many will have counterparts at other points on the scale.

Spreadsheets are optimized for flexibility. Databases are optimized for consistency.

ELNs (electronic lab notebooks) are optimized for flexibility. LIMS (laboratory information management systems) are optimized for consistency.

Low-code/no-code platforms aim for the middle of the scale.

Designing a Chimera Data Platform allows you to leverage different software at different points on the scale for each component of your research program, as well as to plan for how to migrate components along the scale.

At what point do you move the data from the shared spreadsheet to a database table? How do you transfer the schema? How do you handle the back-fill, or do you just start from scratch?
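There’s no universal answer, but the mechanics are worth sketching. Here’s one way the spreadsheet-to-database move might look, assuming the spreadsheet was exported to CSV, using pandas, and reusing the hypothetical `readouts` table from above. The column names are made up:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect("platform.db")

# Back-fill: load the legacy shared spreadsheet (exported to CSV).
legacy = pd.read_csv("shared_readouts.csv")

# The spreadsheet era tolerated free-form values; the database will not.
# Split out rows that violate the narrowed-down vocabulary for manual
# review instead of silently dropping them.
allowed = {"ic50", "percent_inhibition"}
ok = legacy["readout_type"].isin(allowed)
legacy[~ok].to_csv("readouts_needing_review.csv", index=False)

# "Transferring the schema" here means mapping spreadsheet columns onto
# the table's columns, then appending the clean rows.
legacy[ok][["experiment_id", "readout_type", "value"]].to_sql(
    "readouts", conn, if_exists="append", index=False
)
```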

Migrating a component to a new point on the flexibility/consistency scale is a delicate and time-consuming process, particularly if it involves switching systems.

For an organization to scale its research program, its data platform must let it manage processes at all points on the flexibility/consistency scale, while providing tools to make migrating processes along the scale as efficient and reliable as possible.

Want to read more? Subscribe to my weekly newsletter where I’ll announce new blog posts and explore ideas from earlier posts in more depth.
