
January 16, 2025

Jim Wagner

The End of ‘No AI’ Clauses in Clinical Research Agreements: Why Resistance Is Both Futile and Counterproductive

Last month, I shared a LinkedIn post highlighting a “No AI” clause that a major pharmaceutical company had included in a clinical research agreement with a hospital. The post garnered significant attention: over 20,000 views and more than 100 comments. It’s time for an update.

Yesterday, Google announced that its advanced Gemini AI is now embedded by default for all Google Workspace users, at no extra charge. The integration spans email, calendar, documents, and spreadsheets. It’s a watershed moment: advanced AI tools are now integral to every business application, not optional extras.

Yet many large organizations, including some pharmaceutical companies, continue to insert broad “No AI Allowed” clauses into their contracts, prohibiting any use of AI technologies. This disconnect isn’t just ironic—it actively undermines healthcare innovation and, ultimately, patient care. It highlights the need for a new approach: one that prioritizes responsible AI use over futile attempts to ban it.

The Reality Check: AI is Already Everywhere

The Google Workspace announcement underscores a critical reality: AI is now woven into the fabric of standard business tools. From Microsoft Office to Docusign, from Salesforce to Adobe, AI capabilities are no longer optional add-ons but core functionalities. When basic tools like email and document editing incorporate AI, complying with broad “No AI” clauses becomes technically impossible. Operating a modern business without interacting with AI in some form is simply not feasible.

One-Sided AI Clauses: A Missed Opportunity for All

Organizations that insist on “No AI” clauses often draft them in ways that create an uneven playing field. A typical clause might read: “Counterparty will not store, process, review, or transmit any Confidential Information using artificial intelligence, machine learning models, or similar technologies.”

Ironically, the same companies imposing these restrictions are often heavy users of AI themselves. This one-sided approach enables them to reap AI’s benefits while contractually denying them to their partners.  

More importantly, the practical consequences of such clauses are significant. AI improves efficiency and effectiveness across a wide range of tasks, and contract work is no exception. In healthcare, prohibiting a hospital from using AI tools to review and execute clinical trial agreements could delay study startup, slowing patient access to potentially life-saving treatments.

The Path Forward: Practical Solutions and Responsible AI

Rather than futilely trying to ban AI, organizations should focus on governance that addresses legitimate concerns. Life sciences contracting professionals can draft clear, targeted clauses that “surgically” manage specific risks around data protection, model training, and intellectual property ownership, instead of imposing impractical blanket prohibitions.
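To illustrate (this is sample language only, not a recommendation for any particular agreement), a targeted clause might read: “Counterparty shall not use Confidential Information to train, fine-tune, or otherwise improve any artificial intelligence or machine learning model, and shall use AI tools to process Confidential Information only where those tools maintain the confidentiality, security, and data-handling protections required under this Agreement.” A clause like this addresses the real risks—model training and data leakage—without barring the productivity gains AI provides.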

Beyond contractual measures, organizations should evaluate the platforms they use to collaborate. Modern AI tools can offer robust security and confidentiality features that protect all parties while enabling productivity gains.  

The Contract Network platform, for example, takes this balanced approach: it provides AI capabilities to all parties while ensuring secure, confidential data handling, with strict controls to prevent unauthorized model training and data leakage.

Time for Change

The brief era of blanket “No AI” clauses must end. These clauses are one-sided and impossible to enforce, and they deny organizations vital benefits that AI can responsibly unlock. Instead, we need practical solutions that embrace AI’s potential while addressing legitimate security and privacy concerns. The technology is here—now it’s time for our contracts to catch up with reality.