Clinical development teams talk a lot about patient centricity. In practice, patient input still arrives late, in small quantities, and in formats that are hard to use. A handful of quotes from an advisory board, a quickly convened focus group, a few slides summarising survey results. These fragments might shape a line in the informed consent form or tweak a recruitment flyer, but they rarely change the core design of a protocol.
That gap is no longer tenable. Regulators in Europe, the UK, and the United States are signalling that patient experience data is not a nice supplement to clinical outcomes; it is part of the evidence base for assessing benefit, risk, and the credibility of a trial’s results. At the same time, sponsors know that protocols designed without a realistic view of people’s lives are more likely to run late, need costly amendments, and produce data that does not generalise to the populations who will actually receive the medicine.
The problem is not that we lack patient stories. It is that we have not turned those stories into decision‑grade data that can stand alongside epidemiology, operational metrics, and regulatory requirements. This article explores what decision‑grade lived‑experience data looks like, how it differs from traditional patient engagement, and how it can de‑risk protocol design before you lock the study.
Traditional approaches to patient engagement in clinical research follow a familiar pattern. A protocol is drafted based on scientific, regulatory, and operational considerations. Once the broad shape is in place, a small group of patients or advocates is invited to review it. They are asked to comment on burden, clarity, and perceived barriers. Their feedback is captured in notes or slide decks and shared with the team.
This is better than no engagement at all, but it has clear limitations.
First, it is late. By the time patients see the protocol, most major design decisions have been taken. Eligibility criteria, visit schedules, endpoints, and procedures are often treated as fixed, even when patient feedback suggests they are misaligned with real lives. At best, the team can soften language, adjust a visit window, or tweak recruitment materials. The structure of the trial remains largely unchanged.
Second, it is narrow. A handful of voices can provide valuable perspective, but they cannot reliably represent the diversity of experience within a disease area, let alone across geographies, socio‑economic groups, or cultural contexts. They certainly cannot stand in for communities who have historically been excluded from research entirely, and whose experiences of care and trust may be very different.
Third, it is unstructured. Feedback is often captured as notes, quotes, and impressions rather than as data that can be analysed, segmented, and reused. It is difficult to link specific insights to specific design decisions or to revisit them when planning future trials. Teams are left relying on memory and informal narratives rather than a reusable insight base.
The result is a pattern many development leaders will recognise. Advisory boards and focus groups generate good discussion and compelling stories. Those stories influence the tone of presentations and the language of external communications. They rarely change the underlying risk profile of the protocol.
Decision‑grade lived‑experience data is different in kind, not just in volume. At its core, it is information that is:
Collected directly from patients, carers, and community members, particularly from underserved groups who are under‑represented in existing datasets.
Structured and enriched with social determinants of health so that it can answer specific questions about feasibility, burden, access, and trust.
Designed to be reused across programmes and indications, rather than discarded at the end of a single study.
Importantly, decision‑grade does not mean quantitative only. It means that qualitative and quantitative inputs are gathered and organised with enough rigour that clinical, statistical, and regulatory stakeholders can rely on them to inform real decisions.
Where traditional patient engagement might tell you that “some people found the visit schedule hard to manage,” decision‑grade lived‑experience data tells you which groups, in which contexts, for what reasons, and with what likely impact on recruitment and retention.
Consider four domains that decision‑grade lived‑experience data can illuminate.
First, practical barriers and enablers. How do work patterns, caregiving responsibilities, transport options, housing conditions, and digital access shape people’s ability to join and stay in a trial? What forms of support would make the difference between participation being possible or impossible? These questions cannot be answered from electronic health records or claims data. They require direct engagement with people in their own contexts.
Second, trust and relationships. How do people in a given community view clinical research and the institutions that run it? What prior experiences with healthcare systems, discrimination, or exclusion shape those views? Which organisations and individuals are trusted enough to introduce a trial or answer questions credibly? This is especially critical for communities who have historical reasons to be cautious of medical research.
Third, information and belief landscapes. What do people already believe about the disease area, the kinds of treatments under investigation, and the meaning of a clinical trial? Where are the gaps, misconceptions, and fears that might influence consent and adherence? Understanding this context can shape not just recruitment messaging, but also endpoint selection and patient‑reported outcomes.
Fourth, decision‑making dynamics. Who is involved in the choice to participate? In many settings, the decision is not individual but shared with family, caregivers, or community leaders. Knowing whose perspectives matter and what they care about is central to designing consent processes and communication strategies that work.

When these elements are collected systematically, you end up with a dataset that can do more than illustrate a point. It can guide protocol design the way a well‑built epidemiological model or feasibility analysis does.
Turning lived experience into a decision‑grade asset is as much about process as it is about method. It requires clarity about the questions you want to answer, the populations you need to hear from, and the points in the development cycle where insight will actually change decisions.
A practical approach usually has five steps.
First, define the high‑risk indications and populations. Start by identifying where the health equity and feasibility risks are greatest. Which indications have a high burden in underserved communities? Where has recruitment routinely underperformed, particularly for certain demographic or socio‑economic groups? Which pipelines or upcoming programmes will be most exposed to evolving expectations on inclusion and patient experience data?
Second, map the communities you need to engage. Within those indications, which communities are both clinically relevant and historically under‑represented in research? This mapping should consider geography, ethnicity, socio‑economic status, and other factors such as immigration status or disability. It should also identify existing community organisations, advocacy groups, and networks that might act as partners.
Third, select appropriate methods. For some questions, structured surveys with carefully sampled respondents will be most effective. For others, in‑depth interviews, focus groups, or community panels will be better suited to uncovering nuance and context. Digital ethnography may be appropriate where health‑related conversation is already happening in online spaces. The common thread is that methods are chosen to match the questions and populations, not simply for convenience.
Fourth, structure the data for reuse. This is the step that often differentiates decision‑grade lived‑experience work from one‑off projects. Insights need to be coded, segmented, and linked to specific design levers such as eligibility criteria, visit schedules, procedures, site characteristics, and communication strategies. They should be stored in a way that allows teams to query, for example, “What do we know about visit burden and work patterns for people with NASH in this region?” or “What have we learned about transport barriers for heart failure patients in similar trials?”
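To make the idea of a queryable insight base concrete, here is a minimal sketch in Python. The schema, field names, and tag vocabulary are entirely hypothetical, and in practice this layer would live in a qualitative analysis platform or a tagged database rather than in application code; the point is simply that each insight is coded against a population, a topic, and a design lever, so questions like the ones above become simple queries rather than archaeology through slide decks.

```python
from dataclasses import dataclass, field

# Hypothetical schema for a reusable lived-experience insight base.
# Field names and tag vocabularies are illustrative, not a standard.

@dataclass
class Insight:
    indication: str    # e.g. "NASH", "heart failure"
    region: str        # geography the insight applies to
    population: str    # segment, e.g. "shift workers", "carers"
    topic: str         # e.g. "visit_burden", "transport", "trust"
    design_lever: str  # protocol element the insight can inform
    summary: str       # coded finding, not a raw quote
    sources: list = field(default_factory=list)  # engagement/study IDs

def query(insights, **criteria):
    """Return every insight matching all supplied field values."""
    return [
        i for i in insights
        if all(getattr(i, k) == v for k, v in criteria.items())
    ]

insights = [
    Insight("NASH", "US Midwest", "shift workers", "visit_burden",
            "visit schedule",
            "Weekday daytime visits conflict with rotating shift patterns"),
    Insight("heart failure", "UK", "older adults", "transport",
            "site selection",
            "Public transport to tertiary sites takes over 90 minutes"),
    Insight("NASH", "US Midwest", "carers", "trust",
            "recruitment materials",
            "Community clinics are the trusted first point of contact"),
]

# "What do we know about visit burden for people with NASH?"
hits = query(insights, indication="NASH", topic="visit_burden")
```

Even in this toy form, the structure makes the difference visible: a quote in a slide deck cannot be filtered by indication, region, or design lever, whereas a coded insight can be retrieved the next time a team plans a trial in the same population.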
Fifth, embed usage into governance and review. Data only becomes infrastructure when it is part of the way decisions are made. That means specifying where in the development process lived‑experience findings will be reviewed, who is responsible for bringing them into the conversation, and how they will be documented in regulatory and governance materials. It also means agreeing in advance what kinds of decisions this data can and should influence.
Once you have a decision‑grade lived‑experience layer in place, there are several points in the protocol development process where it has outsized impact on risk.
The first is eligibility criteria. Lived‑experience data, combined with epidemiology, can highlight where apparently neutral exclusions are likely to disproportionately remove underserved populations from the enrolment pool. It can flag criteria whose practical effects on people’s lives make participation unworkable, even if they are clinically acceptable. This allows teams to distinguish between thresholds that are essential for safety and those that can be adjusted or managed in other ways.
The second is visit schedule and procedure burden. Insight into work patterns, caregiving responsibilities, and transport realities can reveal where a schedule that appears reasonable on paper is incompatible with the lives of people you most need to reach. This can lead to adjustments such as consolidating visits, using local or community‑based options for some procedures, or introducing remote assessment where appropriate.
The third is site selection. Lived‑experience and community‑level insight can help differentiate between sites that have access to relevant populations in theory and those that have credibility and relationships in practice. It can guide the selection of sites that are capable of engaging underserved communities effectively, and highlight where additional support or partnership is needed.
The fourth is recruitment and retention planning. Understanding how people hear about research, who they trust, and what support they need to stay in a study allows for more realistic, targeted plans. It can point to community partnerships that extend beyond traditional site‑based outreach, inform the design of consent conversations, and shape retention strategies that recognise the pressures people are under.
At each of these points, decision‑grade lived‑experience data reduces the risk that you will discover health equity and feasibility problems only once recruitment is underway. It moves those discoveries earlier, when there is still time to act.
Regulatory interest in patient experience data is sometimes framed as a burden: another box to tick, another set of documents to prepare. Seen through the lens of decision‑grade lived‑experience data, it is better understood as an opportunity to formalise and reward what good teams are already trying to do.
EMA’s work on patient experience data, MHRA and HRA inclusion and diversity guidance, and FDA Diversity Action Plans all acknowledge that data about how patients experience disease, treatment, and research can inform benefit‑risk, label claims, and the credibility of trial findings. What they also make clear is that such data needs to be gathered systematically, with appropriate methods and documentation.
The same lived‑experience infrastructure that de‑risks your protocol can, if designed well, also underpin your regulatory narrative. It allows you to show not just that you spoke to patients, but that you collected and used patient experience data in a structured way to shape design, feasibility, recruitment, and retention. It gives you a basis for explaining why you made the trade‑offs you did, and for demonstrating that you have considered who is missing and what you have done about it.
In an environment where inclusive evidence is increasingly seen as a condition of market access, not just a reputational nice‑to‑have, this dual function matters. It means that investment in decision‑grade lived‑experience data is not an act of compliance for its own sake, but a way to improve both the science and the business case of your development programmes.
If you are already wrestling with recruitment timelines, protocol amendments, and questions about how representative your data really is, you are already living with the consequences of decisions made without decision‑grade lived‑experience data. The opportunity now is to move those decisions upstream, so that understanding how underserved patients will experience your trial is part of the design process, not an after‑action review.
You do not need to build a perfect system overnight, but you can choose to stop designing protocols in the dark. Bringing structured, SDoH‑rich lived‑experience data into eligibility criteria, visit burden, site strategy, and recruitment and retention plans gives your teams a clearer view of the world they are trying to operate in, and a better chance of getting to first patient in and database lock without avoidable detours.

See recruitment failure before it happens
If you want a deeper, more structured framework for using lived‑experience data to de‑risk protocols, our white paper “Health equity risk in clinical research: How to see patient recruitment failure before it happens” goes further than this article.
Inside, you will find:
A breakdown of how current planning data misses health equity risk
A practical definition of lived‑experience data and how to collect it at decision‑grade quality
Detailed examples of how small protocol changes, informed by patient experience, altered recruitment and retention
A health equity risk checklist and scorecard you can apply before protocol sign‑off
You can download the Health Equity Risk Playbook to get the full set of tools and apply them to your next protocol review.
And if you would like to stay close to the practical side of this, seeing what inclusion infrastructure looks like in heart failure, metabolic disease, oncology, and beyond, you can join Unwritten Dispatches, a weekly newsletter focused on stories from the front line of building Unwritten Health.