A better ELN won’t solve your problems

In my last post, I argued that a lack of shared mental models is one of the biggest problems facing tech biotechs because it impedes the cycle of information flow between wet lab and digital teams. This time, I want to explore the first branch of that cycle: How data and metadata get from the lab to an analysis-ready form. And I’m going to do it by examining the software that’s typically used in the lab, through the lens of the mental models that the software both relies on and reinforces.

Everyone loves to talk trash about ELNs and LIMS, and despite what the title of this post might suggest, that’s not what this is about. Instead, here’s what I’ll argue: Most of the commercial software used in the lab today was designed to solve problems that were major headaches 10 or 20 or more years ago. So it’s designed around mental models that are optimized to solve those problems. But these aren’t major problems today, mostly because the software has done such a good job solving them. Instead, there are new major problems that they don’t address, and that require software designed around new mental models.

A lot of the ill will around these applications, beyond the typical thankless nature of IT systems, comes from an unwarranted expectation that they will solve problems they weren’t designed to solve. Sometimes the systems reinforce this expectation by adding on modules and extensions that attempt to expand the scope. But the only way we can actually solve these problems is by developing new mental models, and designing new types of software that both fit into and reinforce them.

In this post, I’m going to outline a major problem that I think we don’t have a great solution for today, describe how the problems that ELNs and LIMS were designed for are different, then speculate a bit about what better software might look like.

If this idea resonates with your own experience, please let me know in the comments below. Also consider signing up for my weekly newsletter where I send out short ideas to get you thinking about this in your day-to-day, and announce upcoming blog posts that will go deeper into this topic.

The Problem

It wasn’t that long ago that all scientists did their own analysis. The same person would set up the samples, take the measurements, drop the data into a spreadsheet, plot some charts, then paste it all into a presentation. For a lot of scientists, this is still the standard. But there are an increasing number of cases where it isn’t.

As more areas of biology begin to involve larger and more complex data sources, it’s becoming more common for one team to collect the data, then pass it off to another team for the analysis. This is where bioinformatics and computational biology came from. But now, as organizations begin to explore applications of machine learning to lab data, including the small data that Excel can handle, data scientists and ML scientists are beginning to find themselves on the receiving end.

A wet lab scientist who analyzes their own data will typically look at one or two experiments at a time, with complete context about how the data was collected and what question they’re trying to answer. They can tune the data collection to that one question for that one experiment. For the next experiment, it might make sense to collect it completely differently.

On the other hand, when someone else is doing the analysis, they won’t automatically have the full context. In fact, they may be asking a question that the person collecting the data didn’t even think of, and will often need to use data from multiple experiments. For that, they need all the data and metadata – the pieces that supply the missing context – to be in a consistent form.

The data itself typically comes from an instrument, so it looks the same each time. Getting it in front of the analyst may not be easy, but it’s a technical problem and a good data engineering team can usually make progress regardless of shared mental models. (There are also good off-the-shelf solutions for doing this at scale such as TetraScience and BioBright.)

The metadata, on the other hand, is collected by the wet lab scientist. If they’re doing their own analysis, this is pretty easy: they can write down just enough to remind them when they’re building that slide deck. If you’re lucky, it’s in a spreadsheet, but they could probably get away with post-it notes. Enforcing consistency, or writing down things they know they’ll remember anyway, would be wasted effort. This is what shapes their mental model.
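To make the gap concrete, here’s a made-up illustration (the compound names, units, and identifiers are all hypothetical) of the same treatment condition recorded as memory-jogging free text versus the consistent, structured form a downstream analyst would need:

```python
# Hypothetical example: the same treatment condition, recorded three ways
# by scientists who only need to jog their own memory later.
free_text_notes = [
    "dosed w/ cmpd A, 10uM, 24h",
    "Compound-A 10 micromolar, harvested next day",
    "CmpA high dose, o/n",
]

# The consistent form a downstream analyst needs: same fields, same units,
# same vocabulary, for every experiment.
structured_record = {
    "compound": "CMPD-A",         # from a controlled vocabulary
    "concentration_uM": 10.0,     # numeric, fixed unit
    "duration_h": 24,
    "experiment_id": "EXP-0042",  # hypothetical identifier
}
```

Each free-text note is perfectly adequate for the person who wrote it, and useless for anyone trying to query across fifty experiments.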

So here’s the problem: How do you shift the wet lab scientist’s mental model to where they understand that the extra effort to collect complete and consistent metadata is a net positive? What would software that supports and reinforces this mental model look like?

ELN and CYA

It’s commonly noted that Electronic Lab Notebooks (ELNs) were designed to replicate paper lab notebooks, but it took me a while to fully understand what that means. The main purpose of a lab notebook is to verify that an experiment was done, and what its outcomes were. While this verification can happen in a number of different contexts, a central one is demonstrating that you ran the experiment before someone else so you can claim ownership of the idea.

The key here is verification over communication. In the target use case, you know what you’re looking for and you start off with more or less complete context. So the design of any ELN will tend to favor immediacy over generality in how it organizes and presents data: Knowing the context – such as an experiment id, drug program and/or date – allows you to find things quickly. If you don’t have that context, you’re effectively lost. The alternative, favoring the generality that would allow you to find data without complete context, would require the ELN to organize things very differently.

Another important feature of the ELN’s core use case is that it must capture absolutely every experiment, whether it’s a one-off, or the hundredth iteration. A user must be able to upload whatever documentation and data they have, and it has to be easy enough that they’ll actually do it. So ELNs typically favor flexibility over consistency: You can upload your data as Excel files, PDFs, even JPEGs. As long as someone can view it at some later date, the ELN has served its purpose.

Many modern ELNs have additional functionality that allows users to shift towards generality and consistency. They’ll often allow users to enter structured data in the same form across experiments (consistency), select values from a standardized vocabulary (consistency and generality) and allow viewers to search this structured data when they don’t start with complete context (generality).

However, the ELN can’t force users to enter this structured data because that could block users from recording one-off experiments for which the structured fields don’t make sense. Or it might just create enough friction to discourage users from capturing absolutely everything. Moreover, because these structured data features are an afterthought, they tend to be less polished, and less usable than they would be in an application built around structured data.

ELNs are designed to be used after the experiment, including the analysis, is complete. This makes sense for a workflow in which a single person manages the whole process, then gathers everything at the end. But for a process in which information is handed back and forth between data collection and analysis, most ELNs don’t facilitate tracking and coordination.

In other words, the core use case of an ELN forces it to allow, or even encourage, users to work in ways that reinforce mental models favoring immediacy and flexibility.

LIMS and SOP

The typical context for a Laboratory Information Management System (LIMS) is, in some ways, the opposite of an ELN. You start using a LIMS when you’ve done a process or experiment enough times to have a well-defined protocol that you want to follow very precisely, every time you do it. The goal of the LIMS is to make sure that you, or anyone else doing the experiment, follows that protocol.

There are typically two sides to a LIMS: 1) It reminds the user of what step in the process they need to do next and how. 2) It collects data about the process that can be used later to verify its integrity and quality. For this post, we’ll look at the second one.

Because of this context, a LIMS will be optimized for consistency over flexibility. Unlike an ELN that needs to capture every experiment, a LIMS will address a carefully selected set of experiments, and needs to collect the same data in the same exact form each time. A LIMS is also optimized for the immediate use case of quality control. So, like an ELN, a LIMS will favor immediacy over generality: It typically only collects the data it needs for this one use case.

So if the wet lab team is using a LIMS for the type of experiment your data team is interested in, you’re probably in good shape. But often the kind of exploratory analysis that would be most impactful for your data team applies to early-stage experiments that haven’t been standardized or aren’t in a context that requires tight quality control. Trying to use a LIMS for these experiments would be bad for everyone involved, and suggesting it will get you laughed out of the office.

A better way?

Better software alone isn’t going to solve this problem. The hard work is communicating and collaborating between the digital and wet lab teams to define a shared mental model that takes into account both the practicalities of the wet lab and the data quality requirements of the digital teams. But I think it’s worth imagining: once you’ve done this – if you can do it – what would the software that supports this new world look like?

First off, it needs to sit somewhere between ELNs and LIMS on the spectrum from flexibility to consistency. It should be easy for a wet lab user, or a data scientist they’re working with, to define a new type of experiment and the fields that should be collected each time it’s run. It should be slightly harder, but still reasonable, to update those fields when one’s understanding of the experiment changes. (Otherwise users will wait too long to register a type of experiment in the first place.)
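As a rough sketch of what this might look like – assuming nothing about any particular vendor, and with all names invented for illustration – a wet lab scientist and a data scientist could jointly register an experiment type with required fields and controlled vocabularies:

```python
from dataclasses import dataclass, field

# A minimal sketch of registering an experiment type, assuming the imagined
# tool lets teams declare required fields and allowed vocabularies up front.
@dataclass
class ExperimentType:
    name: str
    required_fields: dict[str, type]            # field name -> expected type
    vocabularies: dict[str, set[str]] = field(default_factory=dict)

    def validate(self, record: dict) -> list[str]:
        """Return a list of problems with a metadata record (empty if valid)."""
        problems = []
        for fname, ftype in self.required_fields.items():
            if fname not in record:
                problems.append(f"missing field: {fname}")
            elif not isinstance(record[fname], ftype):
                problems.append(f"{fname} should be {ftype.__name__}")
        for fname, allowed in self.vocabularies.items():
            if fname in record and record[fname] not in allowed:
                problems.append(f"{fname} must be one of {sorted(allowed)}")
        return problems

# Hypothetical experiment type, defined jointly by the wet lab and data teams.
dose_response = ExperimentType(
    name="dose_response",
    required_fields={"compound": str, "concentration_uM": float, "duration_h": int},
    vocabularies={"compound": {"CMPD-A", "CMPD-B"}},
)
```

Updating the fields later would just mean editing this definition – harder than free text, but not much harder.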

Second, it needs to be much closer to the generality end of the immediacy vs generality spectrum. It should encourage users to use pre-defined vocabularies rather than free text. It should be able to take metadata in a form that a wet lab scientist is familiar with, and transform it into a table that a data scientist can query along with all the other experiments.
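Continuing the same hypothetical sketch, turning those records into something queryable could be as simple as validating each submission against the registered experiment type and appending it to one long table that spans experiments:

```python
import pandas as pd

# Hypothetical metadata records submitted for two different experiments,
# both conforming to the dose_response type defined above.
records = [
    {"experiment_id": "EXP-0042", "compound": "CMPD-A",
     "concentration_uM": 10.0, "duration_h": 24},
    {"experiment_id": "EXP-0043", "compound": "CMPD-B",
     "concentration_uM": 1.0, "duration_h": 48},
]

# Reject anything that doesn't conform before it reaches the analysts.
for r in records:
    problems = dose_response.validate(r)
    if problems:
        raise ValueError(f"{r['experiment_id']}: {problems}")

# One consistent table across experiments that a data scientist can query
# alongside the instrument data.
metadata = pd.DataFrame(records)
long_treatments = metadata.query("duration_h >= 24")
```

The point isn’t the specific code; it’s that the schema is agreed on up front, so every record lands in the same table without a cleanup step.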

And third – probably the hardest one – it needs to be more pleasant and less time consuming than whatever the current workflow is. Otherwise, good luck getting anyone to use it. It might even need to be 10 times easier. Maybe it automatically creates an ELN entry with auto-filled fields and data. Maybe it notifies scientists when their instrument data is ready and runs the initial QC and analysis. I don’t know. Like I said, this is the hardest one.

Where are we?

In terms of the four stages of enterprise software, the technical problem of transferring instrument data is in stage 3: There are at least two commercial solutions available, but a lot of organizations still build their own. One of the blockers to getting to stage 4 (at least in my very limited experience) is that for a small number of instruments, an internal solution still feels easier than adopting a complete platform. Many small biotech orgs don’t have many instruments that clearly require external analysis (sequencers, digital microscopes, etc.) and don’t think about all the other instruments (qPCR, flow cytometry, etc.).

The metadata part of the problem, on the other hand, still seems to be in stage 1. Pretty much everyone I’ve talked to about this problem has built or is trying to build a custom solution tailored to their particular data sources and workflows. Many of these rely heavily on Excel, at least as a prototype (which isn’t necessarily bad). The vendors such as TetraScience and BioBright that solve the data part of the problem also have ways of addressing metadata, but it still requires a lot of customization. So do most ELNs. But there isn’t an off-the-shelf solution that directly addresses this problem.

From the perspective of shared mental models, this shouldn’t be surprising. Software is a symptom, not a cause. Organizations won’t adopt software that doesn’t fit their shared mental model, so until a critical mass of biotechs adopts shared mental models that better address this problem, there won’t be a market for better software. We’ll know that has happened when we begin to see enough successful custom solutions within biotech organizations to spot a pattern that can be generalized, and when we start to see commercial options along these lines. There are already a couple that may fit this bill, such as Kaleidoscope and Colabra. But only time will tell.

Conclusion

Widely available tools such as ELNs and LIMS solve important problems, but they’re built around mental models that don’t fit an environment in which different individuals or teams are collecting lab data and doing the analysis. This creates a gap that we need to fill with updated shared mental models, and new types of software to support and reinforce them. This process is underway, but it’s still very early and I’m excited to see what comes out of it.
