The Giant Hidden Problem Plaguing Tech Biotechs

I’ve spent much of the last few years thinking about how to integrate data teams into biotech organizations, mostly circling around an idea I couldn’t quite put my finger on. But I recently learned it has a name:

Shared Mental Models

This is pretty much what it sounds like: Everyone brings their own set of mental models to the work that they do, based mostly on their experience and their educational background. For a highly interdisciplinary team to function, its members need to adjust these individual mental models, or adopt new ones, so that they move closer to each other. The overlap of what they end up with is called a shared mental model.

It’s both intuitive and supported by published research that teams with better shared mental models work more effectively. I’ll explain below what “better” means, but shared mental models have been studied in a wide range of different industries. I’m interested in one industry in particular.

I believe that a lack of shared mental models causes much of the friction when “tech biotech” organizations try to merge biotech with pure tech, even if we don’t realize it. In the rest of this post I’ll explain where this friction comes from, why we often don’t notice, and some of the things we can do about it.

If this idea resonates with your own experience, please let me know in the comments below. Also consider signing up for my weekly newsletter where I send out short ideas to get you thinking about this in your day-to-day, and announce upcoming blog posts that will go deeper into this topic.

The Cycle

It’s no secret that a successful tech biotech relies on wet lab scientists being able to communicate and collaborate with folks from a data background. But to understand why shared mental models are so crucial, it’s worth digging into exactly what this coordination looks like.

At a high level, it follows a cycle with two major branches:

  • Wet lab teams transfer data and metadata/context to data teams for analysis.
  • Data teams communicate analysis and interpretation back to wet lab teams.

For each of these legs to be successful, both teams must agree on the nature of the interaction: What data should be transferred and how should it be organized? What questions should they try to answer, and what methods will provide those answers?

Answering these questions allows the teams to design effective and efficient modes of interaction. But to go beyond the easy and obvious to something truly transformative, they need a collective understanding of deeper questions:

  • What would an ideal outcome look like?
  • What tools are available that could potentially achieve this outcome, including non-standard approaches?
  • What are the costs, benefits and risks of these different tools?

Different members of the team probably have different ideas and different levels of familiarity with each of these questions. Often one side understands the goals, and the other understands the tools. But to get beyond a lowest-common-denominator approach, each member of the team must be familiar with all of them (and others).

The answers to these deeper questions are part of a mental model. A shared mental model is what allows all team members to understand the full picture. Without it, all you get is a missed opportunity.

Task and Team

We often talk about wet lab and data teams needing a shared language or vocabulary. This is absolutely true. But it puts the focus on the meanings of individual words, as if just by writing down some definitions we can answer these deeper questions. Language is part of a shared mental model, but it’s not enough.

A mental model provides a way to explain why things happened in the past, predict what will happen in the future, and continually update these predictions in the present. Shared mental models ensure that the explanations and predictions are more consistent between members of the team, making communication more efficient and allowing faster, better decisions.

The literature on shared mental models divides them into two types:

Task mental models address the deeper questions I mentioned above, related to goals, tools, priorities, and too many other things to list.

Team mental models address the ways that the team operates: Who’s on the team, what roles each member plays, team norms for communicating and coordinating, how each member likes to work and interact, etc.

When worlds collide

Most people have experience building a team mental model from scratch, whether or not they call it that. We’re used to different people having different working styles. So we expect to form a new team model when we join a new formal or implicit team.

Task mental models, on the other hand, depend on the framework that you’re applying to a particular type of problem. If you’ve spent your entire career approaching similar problems from the same perspective – biology, chemistry, data science, etc. – you won’t get much experience adjusting your task mental models.

In a tech biotech organization, we’re deliberately trying to create a new framework that combines the best of these different approaches. That means everyone needs to develop new task models. But it’s not just the model that’s new – for many people involved, the meta-level task of adopting a new task model is unfamiliar. So what happens when they suddenly need to do it?

Natural Selection

But why do different fields adopt different task mental models in the first place?

A team’s task model determines the path they follow through different approaches, and thus how quickly they get to the right one (if at all). Given teams that are equally smart and hard-working, the one with the task model that best fits the given context and problem will win. Biology, where cycle times are slow and there are more exceptions than rules, requires a very different task model than the marketing and social network worlds that created data science. Tech biotech, a different context from both of those, requires a completely novel task model – maybe a hybrid of the two, maybe not.

But that’s easier said than done. The pioneers who managed to find a task model that fit each of these original contexts became its leaders because they got to the solutions faster. The people that followed them learned the task model implicitly by emulating their thought patterns. As long as the task model continued to work, they didn’t give it a second thought. Why would they?

And yet, here we are: That’s exactly what they need to do now.

Defining “better”

The research on shared mental models defines two metrics: Similarity is how close the mental models of the individuals on a team are to each other, while accuracy is how close the team’s shared mental model is to an externally defined, ideal model. A few different methods are used to measure these in the literature, but I haven’t yet found one that seems practical outside a controlled study.

Depending on the situation, similarity or accuracy may be more important. Not surprisingly, they tend to be highly correlated. But they do give us two separate goals:

  • To improve similarity, we should encourage individuals to update their mental models to more closely match each other.
  • To improve accuracy, we should define an ideal model and encourage the team to adjust their individual mental models to match it.
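To build intuition for the two metrics, here’s a toy sketch of my own (not a method from the literature): picture each person’s task model as the set of assumptions they take for granted, then score similarity as the average pairwise overlap within the team, and accuracy as the average overlap with a hypothetical ideal model. The specific set contents and the Jaccard overlap measure are illustrative assumptions, not anything the research prescribes.

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap between two sets of assumptions, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 1.0

def similarity(models):
    """Mean pairwise overlap across all team members' models."""
    pairs = list(combinations(models, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

def accuracy(models, ideal):
    """Mean overlap between each member's model and the ideal model."""
    return sum(jaccard(m, ideal) for m in models) / len(models)

# Hypothetical assumption sets for two team members and an "ideal" model.
wet_lab = {"FAIR data", "repeat experiments", "biology is messy"}
data_sci = {"FAIR data", "version everything", "statistics first"}
ideal = {"FAIR data", "repeat experiments", "statistics first"}

print(similarity([wet_lab, data_sci]))         # low: only one shared assumption
print(accuracy([wet_lab, data_sci], ideal))    # each member partially matches the ideal
```

The two numbers can move independently: a team can be highly similar (everyone shares the same assumptions) yet inaccurate (those shared assumptions don’t match the ideal), which is why the research treats them as separate goals.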

The research on model accuracy mostly focuses on contexts that have enough history to establish a clearly defined ideal model, such as leadership models in the military and care models in healthcare. In these contexts, you can explicitly write down the principles that you want your shared mental models to include.

For tech biotechs, I don’t think we’re there yet. There are things you’d probably want an ideal shared mental model to include, like the FAIR data principles, repeatability of experiments, and an emphasis on statistics. But this early in the natural selection process, we haven’t seen enough examples to really know what works.

In fact, each biotech organization will probably have a slightly different ideal shared mental model. Maybe even each team within the organization.

Finding a Solution

So, how do we fix the issue? How do we define an ideal shared mental model? How do we get everyone in the organization to adopt it?

I don’t know.

Like I said above, I only started learning about this recently. And while there’s a fair amount of published research, it hasn’t yet found any silver bullets.

Based on what I’ve read, my own experience and conversations with others, my current theory is that there are three stages to addressing these issues:

First, try using shared mental models as a lens to examine the interactions between the wet lab and data teams in your own organization. You may start to notice things you didn’t notice before. Are people talking to each other without actually communicating? Are they having trouble deciding who’s capable of solving the shared problems? Are they circling around decisions that should be easy? (These are all links to my newsletter, which you should sign up for.)

Once you identify these immediate problems, and better understand the underlying cause, you can use whatever leadership tools you’re most comfortable with to handle them.

Second, start doing things to encourage these teams to communicate the assumptions they’re making based on their mental models. Start conversations by reviewing context and goals. Ask questions that encourage people to think about and put words to the things they take for granted.

As team members start to recognize where their mental models differ, and compare the options that they take for granted, they’ll begin to adjust their assumptions to match their colleagues.

Finally, as you begin to see common themes emerge from this process, write them down and share them. Explicitly call out when you see others embracing these themes or failing to. Ask others to discuss how they have or haven’t been following this model.

Conclusion

We don’t notice our mental models because they consist of all the things we take for granted. So it’s often hard to notice when our colleagues have different mental models. I believe that this hidden nature makes building shared mental models the biggest problem facing tech biotech organizations today.

Above, I outlined a few things I’m trying, that you can try too. I plan to go deeper into different aspects of this in upcoming blog posts, based on both the published research and what I learn from other folks in biotech who are thinking about these things. If these ideas resonate with your experience, I’d love to hear from you either in the comments below or by sending an email to scalingbiotech@substack.com. And consider signing up for my substack newsletter, where I send short, weekly notes about these same topics.
