
The Biopharma Intelligence Gap

Most biopharma intelligence teams are running two races at once: the sprint to get smart fast, and the marathon of keeping the competitive picture current. In theory they should compound. In practice they never do. A look at why, and what needs to change.

Andrew Pannu
April 14, 2026

We are all managing the same underlying constraint. Whether we sit in business development, search & evaluation, competitive intelligence, or commercial strategy, we are expected to make decisions quickly in a domain where the ground shifts constantly and the evidence is scattered across sources that were never designed to fit together.

That work tends to split into two concurrent jobs.

The first is the sprint — get smart on something new, fast. A new modality, target, indication, company, technology, asset, trial, or deal context. Sometimes the sprint is a few hours to be credible in a meeting. Sometimes it expands into a multi-week dig because the initial question turns out to be a stack of questions: what exactly is happening scientifically, who else is pursuing it, how the clinical picture has evolved, what the adjacent competitive moves are, and what all of that implies for a decision we need to make. At the start, we rarely know whether the work is headed toward something consequential or something that fades in two weeks, but we still have to synthesize enough signal to move the organization forward. And the information most likely to do that is often the least accessible: preclinical programs that haven’t broadly surfaced, developments in markets like China, or under-the-radar abstracts and patents.

The second is the marathon: maintaining a coherent competitive picture as it changes underneath us. While we are deep in one sprint, trials read out elsewhere. Partnerships form. New data appears in publications, patents, posters, and registries. Companies reposition. Assets progress and stall. The expectation is not simply that we are aware of these changes, but that we can place them into a living model of “what matters,” updated continuously enough that our next sprint begins from a strong baseline rather than from scratch.

In theory, those two jobs should compound into a single intelligence flywheel: the better our standing worldview, the faster our deep dives; the more deep dives we do, the richer that worldview becomes.

In practice, it rarely compounds, because the infrastructure powering our intelligence doesn’t support it.

I was talking to a head of strategic intelligence at a top-20 pharma who walked me through what it takes to produce a single deliverable for her executives. Her team subscribes to five or six separate databases. Each covers a different slice of the landscape. None of them talk to each other. To build one report, she logs into each platform separately, pulls what she can, compiles it manually, then matches it against the company’s internal data to tell the full story. The process takes eight weeks. And even then, she told me she never gets all the answers: she ends up filling in the gaps herself, going back to primary sources one by one.

Pipeline databases and commercial platforms provide structure, but the structure comes at the expense of context. They help us orient quickly: what exists, who is running what, and where the obvious comparables are. They rarely answer the question we actually need answered: is this asset differentiated enough to justify a serious upfront, given what’s coming behind it? Their representation of the world is inherently lossy: limited fields, fixed taxonomies, and an implicit assumption that the hard part is retrieval rather than synthesis. We are the ones who turn retrieved facts into decision-relevant insight.

Document search tools improve access to unstructured sources: filings, transcripts, publications, expert calls, news. But “finding” is different from “knowing.” They can tell us what was said. They cannot tell us what it means for our position, how it connects to the broader landscape, or how it changes our priors. And because they lack a domain-native entity model, they treat each document as its own universe rather than as an update to a shared picture.

For the biggest questions, we still commission consulting engagements, and for good reason: when the stakes are high, synthesis matters, and experienced humans can create clarity. But those engagements come with long timelines and steep costs, and the work product is episodic. The output is a snapshot, delivered at the end of the project, and it begins to decay the moment the world moves on.

So most of us end up stitching together structured databases, unstructured document search, and occasional bespoke analysis. We spend much of the time assembling and reconciling information rather than analyzing it. The database does not know what the consultant learned. The consultant does not know what the internal team flagged last quarter. A new sprint does not inherit the full trail of prior work.

And that’s just the external world. Internally, enterprises are sitting on massive volumes of proprietary intelligence — company decks, scientific assessments, deal memos — that are even more fragmented. The deep dive one team member did on a space last quarter doesn’t inform the next person asking the same question. The insights from a licensing discussion that didn’t close don’t feed back into the competitive picture. The internal knowledge base that should be an institution’s greatest compounding asset behaves instead like a write-only archive.

The result is predictable. The most consequential decisions in our industry — whether to advance or kill a program, how to price a licensing deal, how to position for launch against multiple competitors, whether a partnership is worth a large upfront — are often made with a competitive picture that is partially outdated, manually assembled, and difficult to keep current under real operating conditions.

We’ve all tried AI

So we tried the AI tools. For certain things — summarization, first-pass synthesis, getting oriented on a new space quickly — they are genuinely useful. That part is real and getting better.

But the first time we tried to use one for something that actually mattered — a competitive landscape going to a VP, a diligence memo informing a deal decision — the cracks showed up fast. A hallucinated trial. A partnership that doesn’t exist. A mechanism described subtly wrong, in exactly the way that would embarrass us in front of someone who knows the science.

The problem was never reasoning ability, ours or the model’s. The problem is that the inputs to that reasoning are incomplete, stale, fragmented, or wrong, and no one owns the system that’s supposed to keep them current.

No one owns the underlying picture

This is the realization that took me the longest to arrive at, and it’s what led me to build something new.

Life sciences “data” is not one dataset; it is an evolving web of entities and relationships. Targets map to pathways. Assets map to mechanisms. Trials map to endpoints, populations, comparators, and readouts. Companies map to portfolios, partnerships, and financings. Publications and patents change the interpretation of mechanisms over time. We all know this. We navigate it every day. And yet nothing we use represents the world this way.
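
To make the shape of that claim concrete, here is a deliberately toy sketch in Python of what a connected, time-stamped view of the landscape might look like. Every name and field below is invented for illustration; this is a sketch of the idea, not a description of any particular product or dataset.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative only: a toy entity-and-relationship view of the landscape.

@dataclass
class Entity:
    kind: str   # e.g. "target", "asset", "trial", "company", "publication"
    name: str

@dataclass
class Relationship:
    source: Entity
    relation: str   # e.g. "modulates", "evaluated_in", "partnered_with"
    target: Entity
    as_of: date     # when this link was last verified
    evidence: list[str] = field(default_factory=list)  # citations / provenance

# Hypothetical entries, purely to show the shape of the graph.
target = Entity("target", "Target-X")
asset = Entity("asset", "Asset-001")
trial = Entity("trial", "Hypothetical Phase 2 study")

landscape = [
    Relationship(asset, "modulates", target, date(2026, 4, 1), ["primary publication"]),
    Relationship(asset, "evaluated_in", trial, date(2026, 4, 1), ["registry entry"]),
]
```

The details don’t matter. What matters is that each link carries its evidence and a date, so the picture can be traversed, audited, and updated rather than flattened into disconnected rows.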

The databases give us structured fields but strip out the connective tissue. The AI tools reason about whatever they have ingested, with no way to know whether the picture is complete or current. The consultants build a point-in-time view that begins to decay the moment they deliver it. No one is doing the continuous, unglamorous work of maintaining a verified, connected, living model of the landscape that reflects how we actually think about the space.

This is equally true inside our own organizations. The deep dive someone built conviction on six months ago lives in a folder no one remembers. The expert who developed the institutional view on a mechanism moved to another team, and the knowledge went with them. A consulting engagement from last year produced insights that would shortcut this quarter’s sprint, but the deck is buried and the context around it — why certain conclusions were drawn, what was excluded and why — is gone. Institutional knowledge doesn’t just fail to accumulate. It actively decays. Every sprint that starts from scratch is evidence of that decay.

A verified, current view of the biopharma landscape is not something you configure once. It is an ongoing operational posture: domain-specific curation, daily reconciliation of conflicting signals, a process for incorporating new information that preserves lineage and explains why the picture changed. This is unglamorous, intensive work. It does not generalize across industries. It does not emerge from a smarter model. It gets done by teams whose entire job is maintaining ground truth in a specific domain.
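
As one hedged illustration of what “preserving lineage” could mean operationally (again, the names are invented and this is not how any real system is built), an update is recorded as a revision carrying its source and rationale, never as a silent overwrite:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Illustrative only: a toy record that keeps its own change history,
# so "why did the picture change?" is always answerable.

@dataclass
class Revision:
    field_name: str         # e.g. "phase", "status"
    old_value: Optional[str]
    new_value: str
    source: str             # where the new signal came from
    rationale: str          # why it was accepted over conflicting signals
    recorded_at: datetime

@dataclass
class AssetRecord:
    name: str
    fields: dict[str, str] = field(default_factory=dict)
    history: list[Revision] = field(default_factory=list)

    def update(self, field_name: str, new_value: str, source: str, rationale: str) -> None:
        """Apply an update without discarding lineage: log old value, new value, source, reason."""
        self.history.append(Revision(
            field_name=field_name,
            old_value=self.fields.get(field_name),
            new_value=new_value,
            source=source,
            rationale=rationale,
            recorded_at=datetime.now(),
        ))
        self.fields[field_name] = new_value
```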

Someone has to stand behind the answer

Even if someone solved the ground truth problem, even if we had a system that maintained a verified, current, connected picture of the landscape, there’s a second issue that took me longer to see clearly.

The highest-leverage moments in our work are inherently ambiguous. A Phase 2 readout that could be interpreted in three ways. Conflicting signals about a competitor’s partnering intent. A mechanism where the preclinical promise and the clinical reality don’t line up. These are the moments where decisions actually get made, and they almost never resolve to a clean answer.

When we commission consulting work for these moments, we’re paying for something specific: someone who will take a position. Not the slides — the willingness to say “we checked every number, and here’s what we think.” That is what those engagements buy. But they’re slow and expensive, and if we’re honest, 80% or more of the time and cost goes into commoditized assembly work: getting the data together, cleaning it, drafting the first cut of materials. The actual interpretation, the 20% that matters, gets squeezed into whatever time remains at the end.

The general-purpose AI tools I tried had the opposite posture. They’d synthesize, summarize, and generate, often impressively, and then, somewhere in the output, slip in a version of “this might be wrong, please verify independently.” Which is honest. They cover everything from cooking recipes to contract law. They can’t stand behind the answer in any one domain because they’re not built to go that deep.

But for the person sitting with that ambiguous readout, “please verify independently” just pushes the hardest part of the work back onto them. The AI made the easy part faster and left the hard part untouched.

The consultants stand behind the answer, but take months and cost a fortune. The AI tools are fast and cheap, but won’t commit to anything. And the people who actually carry the decision — the in-house teams — are left triangulating between sources that don’t talk to each other and tools that won’t take responsibility for what they produce.

What comes next

All of this is why we built Sleuth. The next piece will be about what that looks like in practice: the product, the approach, and the principles behind how we close this gap.
