September 18, 2025

Jim Wagner

Responsible AI in Research: What Changed and What It Means for You

New guidance from The Joint Commission and the Coalition for Health AI (CHAI) outlines a practical path for the responsible use of artificial intelligence (AI) in healthcare and research. This is the first coordinated, cross-sector effort to define what “responsible AI” actually looks like in a clinical environment—from how AI is deployed, to how it’s monitored, governed, and contracted. The full guidance is here.

What Was Introduced—and By Whom

The Joint Commission, the nonprofit accreditor for most U.S. hospitals, and CHAI have released a joint framework called the Responsible Use of AI in Healthcare (RUAIH™). It outlines seven key elements that organizations should implement when using AI tools in patient care, research, or related administrative services.

While this initial release is a high-level policy framework, it lays the foundation for future playbooks and a voluntary certification program—making this a pivotal step toward shared standards across the healthcare and research landscape.

What’s the Goal?

In plain terms: to help healthcare and research organizations use AI safely, ethically, and effectively.

That means reducing risk, improving transparency, and ensuring patients (and staff) are protected as AI becomes more deeply embedded in clinical and research workflows.

What This Means for You and Your Team

To align with this guidance—and prepare for what’s coming next—organizations should take the following actions:

  1. Create best practices and documentation
    Document how you use AI tools: what they do, who’s responsible, what data is involved, and how decisions are made. Keep it clear, short, and updatable (see the sketch after this list for one way to structure an entry).
  2. Implement those practices across the board
    Train your team. Update your SOPs. Use checklists. Make it routine—not reliant on one compliance lead or point person.
  3. Ensure compliance across your organization and your business partners
    Your CROs, vendors, and sites should follow the same standards. That means consistent contracting, monitoring, and shared governance.
  4. Undertake appropriate diligence before entering new relationships
    Ask vendors and collaborators for a plain-English summary of each AI tool: what it does, where it runs, what data it uses, who maintains it, and what its known limitations are.
  5. Document these practices in contracts
    Don’t rely on policy alone. Capture key representations and warranties, data use limitations, breach protocols, and change control in your clinical trial agreements (CTAs), business associate agreements (BAAs), and data use agreements (DUAs).
  6. Update your process regularly
    AI evolves quickly. So should your oversight. Use change logs, periodic reviews, and clear triggers for reevaluation when models, vendors, or uses change.
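
To make items 1 and 6 concrete, here is a minimal sketch, in Python, of what one entry in an AI tool inventory might look like. The `AIToolRecord` structure and its field names are illustrative assumptions, not something prescribed by the Joint Commission/CHAI guidance; adapt the fields to your own documentation practices.

```python
# Illustrative sketch only: the structure and field names are hypothetical,
# not prescribed by the Joint Commission/CHAI guidance.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in an AI tool inventory: what the tool does, who owns it,
    what data it touches, and how changes are tracked over time."""
    name: str                     # the AI tool or model
    purpose: str                  # what it does, in plain English
    owner: str                    # who is responsible for oversight
    data_used: list[str]          # categories of data the tool touches
    known_limitations: list[str]  # documented limitations and caveats
    vendor: str | None = None     # external vendor, if any
    last_review: date | None = None
    change_log: list[str] = field(default_factory=list)  # model, vendor, or use changes

# Usage: record a hypothetical tool and log a model update that triggers re-review.
record = AIToolRecord(
    name="note-summarizer",  # hypothetical tool name
    purpose="Summarizes visit notes for study coordinators",
    owner="Clinical Informatics",
    data_used=["de-identified visit notes"],
    known_limitations=["not validated for pediatric populations"],
)
record.change_log.append("2025-10-01: vendor shipped model v2; revalidation required")
record.last_review = date(2025, 10, 1)
print(record.name, "| last reviewed:", record.last_review)
```

A plain spreadsheet or one-page document can capture the same fields; the point is that every tool has a named owner, a stated purpose, and a running change log, whatever the format.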

Contracting Checklist: What to Include

If you’re involved in clinical research contracting—especially as a sponsor, CRO, or site—here are the key areas addressed by the Joint Commission/CHAI guidance:

  • Data use and privacy: Limit to the minimum necessary. Prohibit re-identification. Define what’s allowed for training or commercialization.
  • Transparency and consent: Define what must be disclosed to patients and staff when AI influences outcomes or workflow.
  • Validation and monitoring: Require clear documentation of purpose, limitations, testing, and who monitors performance and bias.
  • Security and incident response: Set standards for encryption, access controls, and 72-hour breach notifications.
  • Change control and versioning: Require notice for AI model updates, testing requirements before deployment, and rollback procedures if updates cause problems.
  • Governance and decision-making authority: Define who has final authority over AI tool selection, configuration, and discontinuation. Establish required expertise on oversight committees and escalation procedures.
  • Training and education obligations: Define specific training requirements before AI tool use, ongoing education responsibilities, and cost allocation across parties.
  • Audit rights and compliance monitoring: Include regular compliance audits, right to inspect AI documentation, third-party audit rights, and remediation timelines.
  • Bias assessment and remediation: Require specific testing for study population representativeness and procedures for addressing bias discovered mid-study.

Beyond the guidance requirements, other common contractual safeguards should also be considered, including:

  • Performance standards and service level agreements
  • Intellectual property rights in AI outputs
  • Vendor flow-down obligations
  • Risk allocation for AI-related issues
  • Data location and retention terms
  • Cross-jurisdictional compliance requirements

This Is What We Do Every Day

At The Contract Network, we work with sponsors, CROs, and research sites to help them reach agreement faster on the issues that matter—especially as new technologies like AI introduce novel risks, responsibilities, and regulatory expectations.

Our platform helps teams:

  • Navigate complex issues like data rights, subject injury, and AI oversight
  • See how others have handled similar challenges through structured insights and data
  • Collaborate directly and transparently on the terms that matter most