Rethinking 'human in the loop' as AI scales across healthcare

Julia Zarb, principal and founder of Blue x Blue, discusses her upcoming HIMSS26 talk, exploring "human in the loop" AI practices and workflows that support human decision-making.
By Jessica Hagen, Executive Editor
Julia Zarb, principal and founder of Blue x Blue

Photo courtesy of Julia Zarb

Julia Zarb, principal and founder of Blue x Blue, teases her upcoming session at the 2026 HIMSS Global Health Conference & Exhibition in March, where she will discuss a growing disconnect between how "human in the loop" is described in healthcare AI and how it functions in practice.

MobiHealthNews: Can you give us a short overview of what you will discuss?

Julia Zarb: Human in the loop (HITL) is becoming a default reassurance in healthcare AI — appearing in policies, governance frameworks and implementation plans — usually meaning a person is expected to review, accept, modify or reject an AI-influenced recommendation. But its practical meaning is still emerging. As AI spreads across provider, payer and pharma workflows, a gap is forming around who is actually accountable at the decision point, what evidence and constraints they can see, and what gets recorded when decisions are questioned.

The urgency is that this is starting to happen at scale, under real operational pressure. Consider the busy clinician, nurse or manager asked to make a call quickly with partial context, limited visibility into compliance parameters or how a recommendation was produced, and unclear paths to pause or escalate. Under pressure, review can become a screen-level action rather than an informed decision — leaving organizations with fragmented oversight and inconsistent acceptance criteria across teams. As AI influences decisions across care, claims and compliance, the "why" behind those decisions often remains scattered across emails, chat threads and memory. When an audit, denial, adverse event or headline hits, organizations can struggle to reconstruct answers to basic questions: who decided what, based on which evidence, under which constraints?

MHN: Why is the question of the "human in the loop" so important when it comes to AI technology?

Zarb: Because HITL can create a mismatch between what we say humans are doing and what they can realistically do in workflow. Clinicians, claims reviewers and quality managers may be expected to "use judgment" on AI-influenced outputs without the context they need — such as policy constraints, risk thresholds and comparative evidence — without visibility into performance drift, and without a practical way to disagree or escalate. Surveys already suggest many physicians expect they will be held accountable for AI-related errors, even when they had no role in selecting or configuring the system.

This is also where the "learned intermediary" idea becomes strained. It assumes a human intermediary can absorb responsibility by applying professional judgment, but that only works if the workflow supports that judgment with time, evidence and clear decision rights. If those conditions aren't present, oversight becomes symbolic: The human is "in the loop," but the loop itself may not be safe, consistent or usable. The core issue isn't whether a human is involved; it's whether the workflow is designed so the human can realistically do the job they're being assigned.

MHN: What do you hope attendees learn from your talk?

Zarb: A simple shift in how to think about decision governance and compliance: stop asking, "Do we have HITL?" and start asking, "Have we designed for the human in the loop?"

What is the person expected to do at the decision point? How much time do they have? What information and constraints are visible? What happens if they disagree or need to escalate?

The talk draws on current approaches to evaluating HITL, evolving policy directions and operational cases. Attendees will come away with a clearer understanding of immediately relevant red flags: HITL that's just a click, no clear decision owner, limited visibility into constraints or policy conflicts, no plan for monitoring or drift, and no reliable record of how AI influenced a decision.

Julia Zarb's session "Wait, I'm the Human in the Loop?" is scheduled for Tuesday, March 10, from 10:15 to 11:15 a.m. in Palazzo I Level 5 at HIMSS26 in Las Vegas.