CTI Research Series

Responsible AI in Healthcare Needs Evidence, Not Slogans

Dr. Brendan O'Brien

Healthcare AI has no shortage of slogans. Responsible AI. Ethical AI. Safe AI. Human-centred AI. Trustworthy AI. Pick a conference. Pick a vendor deck. They all say the same words.

Slogans don't protect patients. Evidence does.

Our approach to AI governance at CTI has become much more formal over the last year, and I want to be honest about why: because it has to be. Regenemm Healthcare is not a loose collection of clever tools. It is being built as clinical infrastructure, and clinical infrastructure needs governance you can show, not governance you can recite.

Our "Executive Oversight of AI" policy makes the accountability lines clear. AI strategy, risk, safety, privacy and ethics sit at board and executive level. High-impact clinical, privacy or safety releases require senior approval before they go anywhere near a patient. Material changes to a model, prompt, guardrail, or supplier escalate through defined governance pathways. Risk owners exist on paper and in person. Minutes get kept. AISA approvals get logged. Release gates exist. Decision records exist. None of that is theatre: it is the audit surface that lets a regulator, a clinician or a coroner reconstruct what we did and why.
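
To make the audit surface concrete, here is a minimal sketch of what one release decision record might hold. The field names, the RiskClass tiers, and the approval rule are illustrative assumptions for this article, not the production schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class RiskClass(Enum):
    # Illustrative tiers only; the actual policy defines its own taxonomy.
    LOW = "low"
    HIGH_IMPACT_CLINICAL = "high_impact_clinical"
    PRIVACY = "privacy"
    SAFETY = "safety"


@dataclass(frozen=True)
class ReleaseDecisionRecord:
    """One immutable entry in the audit surface: who approved what, when, and why."""
    release_id: str
    change_summary: str                  # change to a model, prompt, guardrail, or supplier
    risk_class: RiskClass
    risk_owner: str                      # a named person, not a team alias
    approver: str                        # senior approver for high-impact releases
    aisa_approval_ref: str               # pointer to the logged AISA approval
    evidence_refs: tuple[str, ...] = ()  # minutes, V&V reports, decision documents
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def requires_senior_approval(self) -> bool:
        # High-impact clinical, privacy, or safety releases need senior sign-off.
        return self.risk_class is not RiskClass.LOW
```

The point of a record like this is not the exact fields. It is that the record is immutable, attributable to named people, and linked to evidence, which is what makes reconstruction after the fact possible.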

It sounds procedural. It is. Healthcare needs procedure.

The AI lifecycle process is even more explicit. Regenemm systems move through inception, design and development, verification and validation, deployment, operation and monitoring, continuous validation, re-evaluation, and eventually retirement. No stage moves forward without evidence. The required artefacts include the business case, the intended-purpose statement, the risk assessment, the architecture and data design, the model and system cards, the data governance plan, the V&V reports, the release records, the monitoring dashboards, and a decommission plan written before the thing is even live.
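
Read as code, that lifecycle is a gate function: a transition between stages is legal only when every artefact required at the next gate exists as evidence. The sketch below assumes a deliberately coarse stage-to-artefact mapping for illustration; the real gates are more granular.

```python
from enum import Enum


class Stage(Enum):
    INCEPTION = 1
    DESIGN_AND_DEVELOPMENT = 2
    VERIFICATION_AND_VALIDATION = 3
    DEPLOYMENT = 4
    OPERATION_AND_MONITORING = 5
    CONTINUOUS_VALIDATION = 6
    RE_EVALUATION = 7
    RETIREMENT = 8


# Artefacts that must exist as evidence before a system may ENTER each stage.
REQUIRED_ARTEFACTS: dict[Stage, set[str]] = {
    Stage.DESIGN_AND_DEVELOPMENT: {
        "business_case", "intended_purpose_statement", "risk_assessment",
    },
    Stage.VERIFICATION_AND_VALIDATION: {
        "architecture_and_data_design", "model_card", "system_card",
        "data_governance_plan",
    },
    Stage.DEPLOYMENT: {
        # The decommission plan is required before go-live, not at retirement.
        "vv_reports", "release_records", "decommission_plan",
    },
    Stage.OPERATION_AND_MONITORING: {"monitoring_dashboards"},
}


def may_advance(current: Stage, evidence: set[str]) -> tuple[bool, set[str]]:
    """Return (allowed, missing_artefacts) for the transition out of `current`."""
    if current is Stage.RETIREMENT:
        return False, set()  # nothing follows retirement
    next_stage = Stage(current.value + 1)
    missing = REQUIRED_ARTEFACTS.get(next_stage, set()) - evidence
    return not missing, missing
```

A gate like this turns "no stage moves forward without evidence" into a property you can test, not just a sentence in a policy document.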

This is not bureaucracy for its own sake. It is how we keep magical thinking out of the room.

If a Regenemm system produces a clinical summary, you should be able to ask — and we should be able to answer — a very specific set of questions. What was the intended use? What data was the model allowed to see? Was PHI reduced before it ever hit the model? Which version of the prompt was used? Was the patient's consent active at the time? Did a clinician review and approve the output? Was the FHIR package valid? What was the hallucination risk on this output type, and how do we know? If something went wrong, can we roll back?
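
Those questions map almost one-to-one onto the fields of a provenance record attached to every output. The field names below are assumptions made for illustration, but each one exists to answer exactly one question in the list above.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SummaryProvenance:
    """Per-output provenance; each field answers one question in the list above."""
    intended_use: str                  # What was the intended use?
    data_scopes: tuple[str, ...]       # What data was the model allowed to see?
    phi_reduced: bool                  # Was PHI reduced before it hit the model?
    prompt_version: str                # Which version of the prompt was used?
    consent_active: bool               # Was consent active at generation time?
    clinician_approved_by: str | None  # Which clinician reviewed and approved it?
    fhir_package_valid: bool           # Was the FHIR package valid?
    hallucination_risk: float          # Measured risk for this output type
    rollback_release_id: str | None    # The release to roll back to, if needed

    def defensible(self) -> bool:
        # An output is defensible only if consent, review, and validity all hold.
        return (self.consent_active
                and self.fhir_package_valid
                and self.clinician_approved_by is not None)
```

If any of those fields cannot be populated at generation time, that gap is itself the finding.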

These are not abstract questions. They are the questions an injured patient will ask. The questions a coroner will ask. The questions a hospital executive will ask after the fact, when there is no time to invent answers. The questions a clinician will ask before they let your software near their patient list. If we can't answer them with evidence, we shouldn't be in the room.

Good AI governance is not anti-innovation. It is the condition for innovation surviving contact with healthcare.

The next era of healthcare AI won't be won by the loudest demo or the slickest pilot. It will be won by systems that can show their work — traceability, auditability, consent, monitoring, validation, and human review built in from the first sprint, not retrofitted after a near-miss. The vendors who can produce that evidence on request will earn trust. The ones who can't will lose access, slowly and then quickly.

My view on this is simple, and it sits behind every governance choice we make at CTI. If we can't evidence it, we shouldn't claim it.

That isn't conservative. That is how you build something that lasts longer than a funding round.


CTI is the AI-native parent company behind Regenemm Healthcare.
