Choosing the Right AI Tool for Clinical Health Care and Research

Making informed choices about AI tools ensures that clinical care and research remain safe, ethical, and effective.

AI in Clinical Research: Higher Stakes, Unique Challenges

Clinical care and clinical research share core safeguards, but AI in research comes with its own risks. Research data, consent rules, and analytic methods can change faster than in direct patient care.

Managing Risks and Oversight

AI used for hypothesis generation, data analysis, or protocol execution can affect scientific validity, introduce bias, or fall outside Institutional Review Board (IRB) approval if not properly governed.

Principal Investigators should seek guidance from the IRB, Information Security, Privacy, and institutional AI governance teams to ensure tool selection, data handling, and analyses meet regulatory, ethical, and scientific standards.

Guidance by Context

To clarify expectations, guidance is organized by user context: clinical, research, educational, and administrative. Each group faces different risks and regulatory requirements.

Each context includes clear instructions on:

  • Permissible data types
  • Approved AI platforms
  • Required review and approval processes

Instructions include direct links to university policies, including the HIPAA Privacy Policy, Data Classification Standard, and the Security Planning Assessment process.

Key rules will be clearly stated:

  • Non-clinical tools must never be used with PHI or for clinical decision-making.
  • Any AI tool not covered by this guidance must undergo a Security Planning Assessment.
  • Use that violates this guidance may be considered a breach of university policy.

DTS AI Review Process

The YNHHS Digital and Technology Solutions (DTS) team manages the Enterprise Healthcare AI Governance review process for clinical AI tools across both Health Sciences IT and YNHHS, ensuring they meet clinical, privacy, and security standards. In collaboration with clinical leadership, DTS actively evaluates AI solutions (both chat-based and other platforms) for clinical use.

Users with access to Yale University resources should also follow University guidance when using AI tools and AI agents.

For clinical AI use case implementations, please submit a ticket.

Incorporating AI Trust and Bias Frameworks

As part of this review process, DTS evaluates AI tools using established trust and bias mitigation processes derived from health system, government agency, and professional society frameworks. The rapid development and integration of AI into clinical care, including predictive, generative, and emerging agentic tools, requires pragmatic, risk-proportionate approaches to evaluation and monitoring so that AI adoption in health care is safe, effective, equitable, and sustainable, ultimately improving patient outcomes and supporting high-quality AI-enabled care. AI implementation includes local validation, bias assessment, and post-deployment monitoring. The evaluation addresses three phases (pre-deployment, implementation, and post-deployment) and four pragmatic guiding principles (strategic alignment, ethical evaluation, usefulness and effectiveness evaluation, and financial performance) to inform health system selection, validation, deployment, and actionable monitoring of AI tools.

By applying these frameworks during the approval process, the University and Health System help ensure that AI tools align with ethical standards and support safe, equitable use in sensitive health care and research settings.

AI can accelerate discovery when it is supported by clear oversight and disciplined research methods.

Approved University Tools

Yale provides a set of approved AI tools that have been reviewed for security, privacy, and appropriate use and are listed in the University’s AI Tool Registry. Only tools included in the registry may be used for university work, and each tool must be used within its approved purpose and data requirements.

AI Tool Registry

To improve visibility and manage risk as AI use expands, Yale will maintain a central AI Tool Registry in partnership with the Provost’s Office.

The registry will provide a shared inventory of AI tools and chatbot platforms used across the University, including:

  • Intended use cases
  • Data classification level
  • Risk assessment status
  • Primary point of contact

Before adopting an AI tool not listed in the registry, departments must submit the tool to hsit-software@yale.edu for an AI risk review. Approved tools will then be added to the registry.

The Provost’s Office will work with Information Security, Health Sciences IT, and Legal/Compliance to ensure tools are reviewed and approved before deployment. The registry will serve as a living resource to support:

  • Policy enforcement
  • Ongoing risk monitoring
  • Strategic planning for AI adoption

Approved AI Tools for Health Sciences

Yale and Yale New Haven Health System (YNHHS) support the careful use of AI and chatbot tools for clinical, educational, and research activities.

Approved use cases:

  • Summarize content
  • Draft or revise text
  • Scan and organize literature

All AI-generated outputs used in professional settings must be verified against trusted, peer-reviewed, or institutionally approved medical sources.

Unauthorized use cases:

  • Provide diagnoses
  • Recommend treatments
  • Perform risk stratification

Use of AI for these purposes requires formal approval. Please contact hsit-software@yale.edu for guidance on obtaining approval.

User requirements:

  • Validate all AI-generated content against authoritative medical guidelines
  • Ensure no PHI or sensitive data is entered into unapproved systems

Human users remain fully responsible for the accuracy, appropriateness, and regulatory compliance of any work informed by AI.

Clarity Platform

Clarity is the University-supported agentic AI platform that provides access to GPT, Claude, and Gemini AI models. The platform can be used for research, teaching, and experimentation. There are two types of Clarity agents/chatbots: PHI-approved agents and non-approved agents. If you are working with PHI data, request access to a Clarity PHI agent using the procedures in this section.

Important Security Notice

The Clarity Platform is not intended for clinical decision support. 

Chatbots designed for health-related data use in Clarity are available to Health Sciences community members and can be identified by the word “Health” in the agent/chatbot name (e.g., GPT-4o - Health). PHI data may be used in these agents only on Yale Managed devices. Use of identified data must follow the Minimum Necessary Principle, and its inclusion must be appropriately governed by Research Compliance and Regulatory Affairs protocols.

Users of the Clarity platform who wish to enter any content that contains ePHI, or use any documents that contain ePHI, must submit a request for access to hsit-software@yale.edu.

Supported Use Cases

  • Automating literature review summaries using PubMed abstracts
  • Simulating patient-provider conversations for educational purposes
  • Creating data pipeline tools that interact with synthetic clinical records
  • Generating hypotheses or refining research questions in grant preparation workflows

Where to access Clarity:

Important usage requirements:

  • Yale University faculty, staff, students, and house staff (residents) have access to Clarity
  • Clinical research affiliates or others with a primary YNHHS appointment cannot access Clarity

Data and privacy rules:

  • Do not use non-ePHI approved Clarity agents to transmit PHI data
  • Third-party applications or integrations leveraging Clarity must not re-identify individuals or expose data externally, even inadvertently

Clinical use: Not Approved

  • Clarity is not authorized to provide diagnosis, treatment recommendations, or risk stratification unless formally approved by hsit-software@yale.edu

User responsibility:

Epic

Epic is making available a suite of predictive and generative AI solutions for clinicians and administrative staff, many of which have no direct patient-facing features. These include AI-assisted responses to patient portal messages, chart summarization, and automation of burdensome administrative tasks, all within Epic workflows. In the future, Epic plans to release patient-facing chatbot functions to facilitate care navigation, provider search, patient education, and triage of patient queries.

Important usage requirements:

  • Access to Epic data and workflows must be approved for non-clinical purposes
  • Non-production environments should be used for testing and development purposes whenever possible

Clinical use: Not approved

  • Not intended for diagnosis or treatment decisions

Abridge

Abridge provides ambient listening to assist with drafting visit notes so that clinicians can engage more actively with patients.

Important usage requirements:

  • Use is limited to licensed healthcare providers who have received the necessary training.

Data and privacy rules:

  • Users must adhere to applicable hospital, state, and federal policies and regulations regarding consent or notification, which vary by practice setting, when using audio recording for ambient clinical documentation. For up-to-date information, see space2care.ynhh.org.

TRINT AI

TRINT AI uses automated speech recognition (ASR) and natural language processing (NLP) to convert speech to text. This tool makes video and audio files searchable, editable, and shareable.

Where to access TRINT AI:

Important usage requirements:

  • Visit DISSC website (above) for access to this tool

Data and privacy rules:

  • Approved for PHI use

Approved YNHHS AI Tools

AI tools can improve productivity across YNHHS, but they must be used carefully to protect the privacy of patients and confidential information. Digital and Technology Solutions (DTS) provides guidance to ensure AI tools are used safely and responsibly.

Microsoft Copilot

Microsoft Copilot is available across the organization to help draft, summarize, organize content, and more.

Important Security Notice: Microsoft Copilot is only approved for use across YNHHS for non-clinical, non-PHI activities.

Before using Copilot, review the important information below and make sure you are logged into your YNHHS account. Verify that the green shield icon is visible before entering any information. Entering PHI or other sensitive data is strictly prohibited and may violate privacy and compliance rules.

Where to access Copilot:

  • Microsoft Teams (left-hand navigation)
  • Office 365 applications
  • Microsoft Copilot Chat

Important usage requirements:

  • You must be logged in with your YNHHS account.
  • A green shield icon should be visible, indicating you are using the YNHHS environment.

Data and privacy rules:

  • Do not enter PHI or other sensitive information.
  • Only use Copilot Chat for work. Don’t use public AI chat tools like ChatGPT or Grok.
  • Even with the green shield present, entering PHI is prohibited.

Clinical Use: Not Approved

  • Microsoft Copilot can be used to ask clinical questions but must not be used for unsupervised clinical decision-making.
  • YNHHS is actively evaluating additional AI tools designed specifically for clinical decision support.

Approved tools only:

  • Microsoft Copilot is currently the only approved chat-based AI tool at YNHHS.
  • Public AI tools (such as ChatGPT, Grok, or similar platforms) must not be used.

User responsibility:

  • Always verify AI-generated content for accuracy and completeness.
  • Users remain responsible for all outputs used in professional contexts.