If you spend your days worrying about clinical trial timelines, you already know the statistic by heart: most studies miss their original enrolment targets, and many end up needing protocol amendments just to get over the line. What is still less widely acknowledged is that this is not random bad luck. In trial after trial, the same pattern shows up: the protocol was designed without a clear picture of whether the people who carry the highest burden of disease could realistically take part, or would want to.
For underserved communities, this blind spot is systematic. It is not caused by bad intent. It is caused by bad data. We plan feasibility on the basis of who is already in our datasets, then act surprised when recruitment stalls in populations who have never been fully visible there in the first place. At the same time, regulators are moving decisively. EMA’s work on patient experience data, MHRA and HRA inclusion and diversity expectations, and FDA Diversity Action Plans all point in the same direction: sponsors should show, up front, how their trials will be inclusive and how patient experience has shaped design, not just report diversity figures afterwards.
You cannot meet that expectation with a slightly better recruitment campaign. You need infrastructure. What follows is a way to think about inclusion as clinical infrastructure: a decision‑grade data layer, built from lived experience and social context, and the workflows that bring it into protocol design before you lock the trial.
On paper, most organisations already have a strong evidence base for trial planning. Feasibility and protocol design are supported by three familiar inputs: electronic health records, claims and administrative data, and historic trial data. These sources are indispensable. They show you how many diagnosed patients fit your indication in a given health system, which therapies and care pathways are being used, and how previous studies have performed at particular sites or in particular countries.
Yet when you look at them through a health equity lens, they all suffer from the same structural limitation: they only see people who are already in the system. EHRs capture those who have a documented relationship with services. Claims and billing data capture those whose care sits inside reimbursed pathways. Historic trials tell you who enrolled in earlier studies that were subject to many of the same constraints you are trying to move beyond. None of these data sources can show you the people who have dropped out of care, who manage their condition through community networks, or who have learnt over years or decades that the healthcare system is not designed for them.
Imagine a Type 2 diabetes trial or a large metabolic study in NASH. Your feasibility work may be based on a robust analysis of clinic populations and previous research experience. But that will skew towards patients whose lives allow for regular appointments, predictable working hours, and stable access to care. It will not capture those working zero‑hours contracts or night shifts, trying to keep multiple jobs going, or juggling childcare and elder care. It will not capture those who are living with the condition but have disengaged from the very services you are assuming will funnel them into your study. It will not capture people whose experiences of discrimination or neglect in healthcare settings make participation in a clinical trial feel like a risk, not an opportunity.
There is also a dimension of trust that no standard dashboard can quantify. In communities with histories of exploitation in research, or contemporary experience of unequal treatment, the decision to join a trial is deeply shaped by relationships, stories, and collective memory. A site can sit in the right postcode and serve the correct demographics on paper yet still fail to enrol because no one in that community genuinely believes this trial is for them. When we treat that trust gap as noise in the system rather than as a predictable consequence of history, we accept delays and amendments that were, in principle, avoidable.
This is why recruitment failure so often appears to come out of nowhere: the risk was there all along, but in a data layer we were not collecting.
The industry’s dominant response to these failures has been to launch initiatives. We build diversity taskforces, commission one‑off outreach campaigns, hire new recruitment vendors, or convene patient advisory boards at the point when the protocol is already fixed and timelines are already under pressure. These efforts matter, and many are undertaken in good faith. But they live at the periphery of the development process. They are add‑ons, not foundations.
An infrastructure view looks very different. In this frame, inclusion is not a separate project owned by a separate team; it is a quality criterion applied at the same stages, and with the same discipline, as safety or operational feasibility. It has a data layer: a systematic way of capturing and structuring lived experience and social determinants of health from underserved communities. It also has workflows: clearly defined points in the lifecycle where that data is used to influence decisions.
The argument for this approach is not only moral. It is operational and regulatory. Operationally, it is the difference between discovering mid‑recruitment that your eligibility criteria have unintentionally shrunk the real enrolment pool to a fraction of what your model assumed, and catching that problem at the design table when changes are still cheap. Regulators, meanwhile, are steadily making inclusive evidence a condition of doing business. EMA’s work on patient experience data, MHRA and HRA inclusion and diversity plans, and the FDA’s Diversity Action Plans all formalise an expectation that sponsors can demonstrate how they have considered who is missing, why, and what they have done about it. An inclusion infrastructure is the most efficient way to generate that evidence without reinventing the wheel for every study.
At the heart of this infrastructure is a dataset that many organisations still do not have: decision‑grade lived‑experience data from underserved communities.
When we talk about lived‑experience data in clinical development, we are not talking about a handful of quotes tucked into a slide deck or an unstructured report from a single advisory board. We mean qualitative and quantitative information collected directly from patients, carers, and community members within underserved groups, in their own words and on their own terms, then structured and enriched so that it can sit alongside your other decision‑support tools.
Done properly, this kind of data captures things your EHR and claims systems simply cannot. It surfaces the practical barriers to participation: the cost of transport, the reality of shift patterns, the availability or absence of childcare, the complexity of fasting visits when food insecurity is a live concern. It describes language needs, cultural expectations, and the kinds of site environments that feel welcoming or hostile. It maps the informal networks through which health information circulates, whether in religious spaces, community centres, barber shops, online groups, or extended family structures, which in turn determines which channels are likely to reach people when you start recruiting.
Just as importantly, decision‑grade lived‑experience data provides insight into the decision‑making unit for participation. For many people, saying yes to a study is not an individual act but a family or community one. Understanding who influences that choice, and what their concerns are, is as important as understanding the clinical trajectory of the condition you are treating.
Collecting this kind of data at scale requires rigour. It means combining structured community surveys, qualitative interviews and focus groups, co‑produced insight with trusted community organisations, and, where appropriate, careful digital ethnography. It means sampling intentionally around the specific indication and geographies you care about, not just around convenience. And it means structuring the output so that it can answer practical design questions: which eligibility criteria will bite hardest in this community, which visit schedules are compatible with real lives, which site characteristics are essential if trust is to be built and maintained.
When those elements are in place, lived‑experience data ceases to be nice‑to‑have colour and becomes what it should always have been: a decision‑grade dataset that can change the course of a protocol before you lock it.
If inclusion is going to function as infrastructure, rather than a promise we make to ourselves, it has to show up at specific, recurring points in the development process. Four of them are particularly important.
The first is the pre‑protocol concept stage. This is the moment when a trial’s basic contours are still flexible. Here, a brief but focused health equity landscape, grounded in lived‑experience data, can make a material difference. It can show you which underserved communities are most relevant to your indication and markets, what the baseline level of trust in research looks like, and which structural barriers are most likely to affect participation. That knowledge can and should shape your initial thinking on eligibility, endpoint burden, and the balance between on‑site and remote procedures.
The second is protocol development itself. As eligibility criteria and key procedures come together, applying a structured health equity criteria screen can prevent some of the most common errors. This is where you examine each exclusion criterion through the lens of both safety and health equity. Does this threshold or requirement disproportionately exclude the very populations you need to understand? Could safety concerns be managed through enhanced monitoring rather than outright exclusion? Are you inadvertently baking in a digital access requirement that will cut out people whose lives are already over‑burdened by work and caring responsibilities? Making these assessments while the protocol is still in draft is far cheaper, and far more effective, than adjusting criteria after months of slow recruitment.
The third inflection point is feasibility and site selection. Traditional feasibility asks whether a site sees enough eligible patients and has the infrastructure to run the study. An inclusion‑aware approach adds another dimension: does this site have meaningful relationships with the communities you need to reach? Do staff speak the languages they will encounter? Has the site demonstrated cultural competence and flexibility in previous studies or in its routine care? By layering lived‑experience and community‑level insight on top of your existing feasibility metrics, you can build a site health equity profile that helps you distinguish between sites that look promising on a spreadsheet and sites that are genuinely positioned to enrol diverse participants.
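One way to make a site health equity profile operational is to treat it as a small, structured record that sits alongside existing feasibility metrics. The sketch below is a hypothetical illustration only: the field names, and the idea of surfacing "equity concerns" as review flags rather than a score, are invented here, not a validated instrument or an existing system.

```python
# Hypothetical sketch of a site health equity profile. All field names and
# flag rules are invented for illustration, not a validated instrument.
from dataclasses import dataclass

@dataclass
class SiteEquityProfile:
    eligible_patients_per_month: int   # traditional feasibility metric
    community_partnerships: bool       # relationships with target communities
    languages_covered: list[str]       # languages site staff can work in
    prior_diverse_enrolment: bool      # track record in earlier studies

    def flags(self, required_languages: set[str]) -> list[str]:
        """Return equity concerns to review alongside standard feasibility."""
        concerns = []
        if not self.community_partnerships:
            concerns.append("no established community partnerships")
        missing = required_languages - set(self.languages_covered)
        if missing:
            concerns.append("no coverage for: " + ", ".join(sorted(missing)))
        if not self.prior_diverse_enrolment:
            concerns.append("no demonstrated diverse enrolment record")
        return concerns
```

The design choice worth noting is that the equity dimension is expressed as named concerns for a human reviewer, not a composite number: a site that looks strong on patient volume but carries two or three flags prompts a conversation rather than silently winning a ranking.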
The fourth is recruitment and retention planning. Rather than designing materials and channels in isolation and then testing them with patients, an infrastructure approach means co‑developing plans with community partners, informed by the data you have already gathered. It involves choosing outreach routes that people actually use and trust, shaping consent processes that are linguistically and culturally accessible, and building in support mechanisms (from transport and scheduling flexibility to regular, trusted points of contact at the site) that make it possible not only to enrol but to stay in the study. In many trials, the health equity risk does not end when a participant signs the consent form; differential dropout can quietly unpick the diversity you worked hard to achieve. Seeing that risk at design stage gives you a chance to plan for it.
When these four touchpoints are in place, inclusion starts to look a lot less like a series of heroic efforts and a lot more like what it should be: a systematic part of how you design clinical research.
Consider a Phase II heart failure trial. The protocol includes an exclusion criterion for patients whose eGFR is below a certain threshold. On the face of it, this is a sensible safety measure to protect those with impaired kidney function. But when you layer in epidemiology and lived‑experience data, a more complicated picture emerges. In both the UK and US, Black and South Asian patients are disproportionately represented in the heart failure population and have higher rates of chronic kidney disease at diagnosis. A seemingly neutral eGFR cut‑off will therefore exclude a far higher proportion of those patients than of their white counterparts.
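The arithmetic behind that skew is simple but easy to miss at the design table. The toy calculation below uses entirely hypothetical screen‑out rates (the numbers are invented for illustration, not real epidemiology) to show how a single eGFR cut‑off translates into very different eligible fractions, and therefore different screen‑fail burdens, across groups.

```python
# Toy calculation: how one eligibility threshold produces unequal exclusion.
# The rates below are hypothetical, chosen only to illustrate the mechanism.

def eligible_fraction(screen_out_rate: float) -> float:
    """Fraction of a group's patients who pass the eGFR criterion."""
    return 1.0 - screen_out_rate

# Hypothetical share of each group's heart failure population whose eGFR
# falls below the proposed cut-off (illustrative numbers only).
screen_out_rates = {"group_a": 0.10, "group_b": 0.25}

for group, rate in screen_out_rates.items():
    print(f"{group}: {eligible_fraction(rate):.0%} eligible")
```

Even with made‑up numbers, the point survives: a criterion that screens out 25% of one group versus 10% of another changes both your realistic enrolment pool and the demographic mix of whoever does enrol, before a single site opens.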
If that dynamic is not understood at design stage, it may only reveal itself months into recruitment, when diverse urban sites start to report unexpectedly high screen‑fail rates, or when it becomes clear that enrolment is skewing toward a narrow demographic profile. At that point, the options are unpalatable: extend timelines, introduce amendments, or accept a dataset whose generalisability and regulatory value are compromised.
Now imagine the same protocol developed with inclusion infrastructure in place. A pre‑protocol health equity review and lived‑experience landscape highlight heart failure as an area of high underserved burden. A structured criteria review surfaces the interaction between the proposed eGFR threshold and the kidney disease profile of relevant populations. Safety experts and clinicians with experience of treating diverse cohorts are brought into the conversation. A revised threshold, coupled with enhanced monitoring, is agreed. Sites with strong, longstanding relationships in affected communities are prioritised. The trial proceeds with a more realistic picture of who will be eligible and why, and the eventual evidence base speaks more credibly to the populations the therapy is intended to serve.
This is one example among many, but it illustrates a broader point: inclusive clinical trials are not the result of dramatic reinventions of the model. They are the result of hundreds of small, informed decisions made earlier in the lifecycle, guided by infrastructure that shows you what those decisions mean on the ground.
If your organisation does not yet have a fully built inclusion infrastructure, you can still begin to shift practice by adding a few simple questions to your next protocol review.
When you look at your eligibility criteria, ask whether any of them are likely to have a disproportionate impact on underserved groups, and whether the clinical rationale for that impact has been explicitly articulated. When you look at the visit schedule and procedure burden, ask whether this protocol could be navigated by someone working shifts, caring for small children, or living a bus ride away from the site. When you review feasibility and site selection, ask not only whether a site has the patients but whether it has demonstrated an ability to build and maintain trust with the communities you most need to reach. And when you discuss regulatory strategy, ask whether patient experience and health equity risk are being treated as an integral part of quality and benefit–risk, or as a separate communications issue.
These questions will not build the full infrastructure for you. But they will start to change the conversations that matter. They will also surface where your greatest information gaps are, often in exactly the places where lived‑experience data would add most value.
Health equity risk in clinical research is easy to file under values: important, but somehow separate from the hard business of development. The reality is that it sits squarely in the middle of your operational and regulatory risk profile. Protocols that are blind to the lived reality of underserved communities are more likely to run late, need amendments, generate unrepresentative data, and run into questions from regulators and payers about how far their findings can safely be generalised.
Treating inclusion as infrastructure is about moving that risk out of the unknown column. It is about recognising that the patients your trials are currently missing carry a disproportionate share of the disease burden, and that understanding their lives is as much a scientific necessity as it is an ethical one. It is also about giving your teams the tools to act, not just more pressure to do better.
If you want to explore this in more depth, including the economics of delay, a fuller analysis of what your existing data misses, and a detailed checklist you can apply across a portfolio, you can draw on the white paper that underpins this article, titled “Health equity risk in clinical research: How to see patient recruitment failure before it happens.” It is written for VP and Director‑level leaders who want to make health equity a standing item in protocol review meetings, not an afterthought once recruitment is already under strain.
And if you would like to stay close to the practical side of this, seeing what inclusion infrastructure looks like in heart failure, metabolic disease, oncology, and beyond, you can join the Unwritten Dispatches, a weekly newsletter focused on stories from the front line of building Unwritten Health.