Guidance by Context
To clarify expectations, guidance is organized by user context: clinical, research, educational, and administrative. Each group faces different risks and regulatory requirements.
Each context includes clear instructions on:
- Permissible data types
- Approved AI platforms
- Required review and approval processes
Instructions include direct links to university policies, including the HIPAA Privacy Policy, Data Classification Standard, and the Security Planning Assessment process.
Key rules are clearly stated:
- Non-clinical tools must never be used with PHI or for clinical decision-making.
- Any AI tool not covered by this guidance must undergo a Security Planning Assessment.
- Use that violates this guidance may be considered a breach of university policy.
DTS AI Review Process
The YNHHS Digital and Technology Solutions (DTS) team manages the Enterprise Healthcare AI Governance review process for clinical AI tools across both Health Sciences IT and YNHHS, ensuring that these tools meet clinical, privacy, and security standards. In collaboration with clinical leadership, DTS actively evaluates AI solutions, both chat-based and other platforms, for clinical use.
Users with access to Yale University resources should also follow University guidance when using AI tools and AI agents.
For clinical AI use case implementations, please submit a ticket.
Incorporating AI Trust and Bias Frameworks
As part of this review process, DTS evaluates AI tools using established trust and bias mitigation processes derived from health system, government agency, and professional society frameworks. The rapid development and integration of AI into clinical care, including predictive, generative, and emerging agentic tools, requires pragmatic, risk-proportionate approaches to evaluation and monitoring so that AI adoption in health care is safe, effective, equitable, and sustainable, ultimately improving patient outcomes and supporting high-quality AI-enabled care. AI implementation includes local validation, bias assessment, and post-deployment monitoring.
The evaluation addresses three phases:
- Pre-deployment
- Implementation
- Post-deployment
It also applies four pragmatic guiding principles:
- Strategic alignment
- Ethical evaluation
- Usefulness and effectiveness evaluation
- Financial performance
Together, these inform health system selection, validation, deployment, and actionable monitoring of AI tools.
By applying these frameworks during the approval process, the University and Health System help ensure that AI tools align with ethical standards and support safe, equitable use in sensitive health care and research settings.