
RAISE 2025 panel statement on aligning AI to clinical values


In September 2025, I had the privilege of participating in RAISE 2025, a symposium dedicated to the responsible and ethical deployment of AI in healthcare. The event brought together a diverse group of experts, including AI researchers, medical professionals, ethicists, and other stakeholders. The program featured a series of panel discussions and keynote presentations exploring various facets of AI integration in clinical settings.

During the symposium, I contributed to a panel focused on aligning artificial intelligence with clinical values, drawing insights from safety-critical AI applications in other domains. Beyond the lively exchange of ideas, I shared a concise reflection on the challenges and opportunities in this area. This article aims to present that perspective in detail.

Harmonizing AI with Clinical Priorities

Although my background is rooted in safe and responsible AI development rather than formal AI alignment research, many of the issues we address in our recent projects inherently involve alignment principles. For instance, in our latest work on medical dialogue systems, we grappled with questions such as: How should an AI conduct patient interviews without overwhelming them? What level of empathy should the system convey? Similarly, when designing AI-generated summaries for physicians, we debated the optimal length, detail, and structure of medical notes. These considerations essentially revolve around tuning AI behavior to reflect the values and expectations of both patients and healthcare providers.

When contemplating the question of which “clinical values” AI should embody, it is crucial to identify the stakeholders involved. Healthcare objectives typically encompass enhancing patient outcomes, improving clinician experience, promoting public health, and reducing system-wide costs. In the realm of medical AI and machine learning, alignment efforts have predominantly focused on clinicians and healthcare institutions. This focus is understandable, given that AI development teams often collaborate closely with medical professionals, and any deployed technology must integrate seamlessly with existing clinical workflows and billing systems. However, one critical group often remains underrepresented in alignment efforts: the patients themselves.

Challenges in Aligning AI with Patient Values

Aligning AI systems with patient values presents unique difficulties:

  • Limited Direct Patient Engagement: Regulatory constraints and practical barriers often prevent researchers from working directly with patients. For example, in our AMIE project, we primarily engaged with patient actors rather than actual patients, limiting authentic feedback loops.
  • Information Gaps and Knowledge Asymmetry: Patients may lack the medical knowledge necessary to articulate their preferences fully or understand the implications of certain decisions. While patients can express fundamental desires, such as empathy, clear communication, and involvement in care decisions, they may not always be equipped to judge what is medically optimal for them.
  • Diversity of Patient Perspectives: Unlike clinicians, whose training and professional standards create some uniformity, patients represent a highly heterogeneous group with varied backgrounds, experiences, and health literacy levels. This diversity complicates the task of creating AI systems that can accommodate a broad spectrum of patient values, a challenge recently addressed under the concept of “pluralistic alignment.”

Despite these obstacles, prioritizing patient-centered alignment remains essential. Delivering superior care and enhancing patient experience lie at the core of healthcare’s mission. Moreover, focusing on patient values can drive broader adoption of AI technologies and indirectly benefit other healthcare goals. Notably, recent advances in AI have opened new possibilities for patient-facing applications, where alignment with patient values is not just beneficial but critical.

Defining the Values for AI Alignment

Identifying whose values to align with is only part of the equation; we must also clarify which specific values to prioritize. In AI ethics, distinctions are often made between preferences, intentions, instructions, and interests, each of which can be explicit or implicit. Patients may be able to communicate some preferences clearly, but others remain unspoken or unconscious.

Ultimately, I advocate for aligning AI systems with the broader, albeit somewhat abstract, notion of the “patient’s best interest.” This concept encompasses not only expressed preferences like empathy and shared decision-making but also the AI’s capacity to evaluate and promote long-term patient well-being and health outcomes.

Insights from Broader AI Safety Research

The healthcare AI community stands to gain valuable lessons from the wider field of AI safety. Researchers in AI safety have identified a complex four-way relationship involving the user, the AI system, the developer, and society at large. This framework has led to a clearer understanding of what constitutes misalignment and has driven the creation of comprehensive risk assessment models. These models help identify, categorize, and mitigate potential hazards associated with AI deployment.

Importantly, AI safety research addresses a broad spectrum of challenges, from immediate operational risks to long-term existential concerns. The generality of these frameworks offers a foundation that can be adapted and specialized for healthcare applications, where the stakes and contexts differ but the underlying principles remain relevant.

Another key takeaway from AI safety is the recognition that users often form complex, nuanced relationships with AI systems. This contrasts with the prevailing view in healthcare, where AI is typically regarded as a mere tool integrated into clinical workflows. However, patients interacting with AI-powered health assistants may perceive these systems as distinct entities, developing unique emotional and cognitive bonds. Acknowledging this dynamic is crucial for designing AI that truly aligns with patient values and expectations.
