CTI Research Series

Consent Is Not a Checkbox

Dr. Brendan O'Brien

If healthcare AI is going to earn trust (and right now, that trust is far from given), consent has to become more than a checkbox at the bottom of an intake form.

Consent has to be understandable. It has to be specific. It has to be respected technically, not just legally. And when it is withdrawn, the system has to know what to stop doing.

In our human ethics preparatory work for the Regenemm Communication Platform Evaluation, the participant information and consent form was deliberately broken into separate parts. Primary study participation. Voice recording. AI processing. Optional fitness tracker or biosensor data. De-identified research use. Future contact. Six distinct decisions, not one bundled "I agree."
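One way to picture that separation in software terms is to model each decision as its own record rather than folding everything into a single boolean. The sketch below is purely illustrative; the class and field names are assumptions for the sake of the example, not Regenemm's actual data model.

```python
# A minimal sketch of consent as six independent decisions rather than one
# bundled "I agree". All names here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ConsentScope(Enum):
    STUDY_PARTICIPATION = "primary study participation"
    VOICE_RECORDING = "voice recording"
    AI_PROCESSING = "AI processing"
    WEARABLE_DATA = "optional fitness tracker or biosensor data"
    DEIDENTIFIED_RESEARCH = "de-identified research use"
    FUTURE_CONTACT = "future contact"


@dataclass
class ConsentDecision:
    scope: ConsentScope
    granted: bool
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class ConsentRecord:
    patient_id: str
    decisions: dict[ConsentScope, ConsentDecision] = field(default_factory=dict)

    def grant(self, scope: ConsentScope) -> None:
        self.decisions[scope] = ConsentDecision(scope, granted=True)

    def withdraw(self, scope: ConsentScope) -> None:
        # Withdrawal replaces the earlier decision rather than deleting it.
        self.decisions[scope] = ConsentDecision(scope, granted=False)

    def allows(self, scope: ConsentScope) -> bool:
        decision = self.decisions.get(scope)
        return decision is not None and decision.granted
```

The point of a structure like this is that a patient who grants voice recording but withdraws wearable sharing is represented faithfully, not approximated.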

That separation isn't a procedural nicety. It is the heart of the thing.

A patient may be perfectly comfortable with their doctor using AI-assisted documentation, and not at all comfortable sharing wearable data with a research database. Another might be happy to be recorded for their care, and decline future research use entirely. Another might want to know exactly what the AI generated and how it was reviewed before they agree to any of it. All of these are reasonable positions held by reasonable people. A serious system supports them. A lazy system hides them under one button.

The plain-language explanation in the ethics material matters too. It says, in words a patient can read, that the system listens to clinical discussion, converts speech to text, uses AI to organise information, and creates structured reports. It states clearly that doctors always review and approve content before it enters the medical record.

That should be the standard for healthcare AI communication, not the exception.

Patients don't need marketing language. They need clarity. They should know when they are being recorded, in plain English, before it begins. They should know what the AI does and, just as importantly, what it doesn't do. They should know whether the data is identifiable or de-identified, and what those words actually mean. They should know who can see it. They should know whether wearable data is optional. They should know — and we should never let them doubt this — that their care is not affected if they decline.

In Regenemm's clinical workflows, consent is also a technical control, not just a legal artefact. The training model is explicit on this point: recording does not begin until consent is active and the patient's identity has been matched. If consent is withdrawn, processing halts and downstream generation is blocked. The architecture refuses to do work the patient has just said no to.
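To make that idea concrete, here is a minimal sketch of consent as a runtime gate, reusing the toy ConsentRecord from the earlier sketch. It is an assumption-laden illustration of the general pattern, not Regenemm's architecture: recording cannot start until consent is active and identity is matched, and a withdrawal halts processing and blocks downstream generation.

```python
# A minimal sketch (assumptions, not Regenemm's implementation) of consent as a
# runtime gate rather than a legal artefact.


class ConsentWithdrawn(Exception):
    """Raised when a pipeline step runs after consent has been withdrawn."""


class SessionGate:
    def __init__(self, consent: ConsentRecord, identity_matched: bool):
        self.consent = consent
        self.identity_matched = identity_matched

    def may_record(self) -> bool:
        # Recording does not begin until both conditions hold.
        return self.identity_matched and self.consent.allows(ConsentScope.VOICE_RECORDING)

    def require(self, scope: ConsentScope) -> None:
        # Every downstream step re-checks consent at the moment it runs, so a
        # withdrawal mid-session stops the pipeline, not just the paperwork.
        if not self.consent.allows(scope):
            raise ConsentWithdrawn(f"consent not active for: {scope.value}")


def generate_report(gate: SessionGate, transcript: str) -> str:
    gate.require(ConsentScope.AI_PROCESSING)
    # AI structuring would happen here; it is blocked entirely once consent
    # for AI processing is withdrawn.
    return f"DRAFT (pending clinician review): {transcript[:40]}"
```

The design choice the sketch tries to make visible is that the check sits inside the processing path itself, so the system's behaviour, not just its records, changes the moment consent changes.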

That, to me, is the only honest way to build clinical AI. Consent has to travel with the data, in real time, and the system has to treat that consent as binding on its own behaviour — not just on the paperwork.

The industry talks a lot about privacy by design. In healthcare AI, we need consent by design as well. Not as a form. Not as a footer. As architecture.

If we get that right, the rest of the trust conversation gets easier. If we get it wrong, no amount of governance language is going to save us — or, more importantly, the patient.


CTI is the AI-native parent company behind Regenemm Healthcare.
