Key Data Protection Terms in AI Procurement Contracts
This post is the first in a series of recommendations on how procurement can shape and structure AI platform licensing contracts.
Artificial intelligence (“AI”) use and implementation is exploding across the business landscape. As enterprises move quickly to implement and integrate AI technology, procurement’s role is no longer focused solely on software licensing pricing or performance. SaaS contract templates do not contemplate the complexities of licensing AI technology, so procurement organizations must evolve how they negotiate AI contracts.
Data is the foundational element of AI; without data, AI cannot function. Large Language Models (“LLMs”) are trained on vast datasets, and customer data is often used to fine-tune those models to make them more useful to the customer. Because of this, procurement must ensure that the AI licensing contract contains robust safeguards around data privacy, data security and intellectual property (“IP”). Insufficient terms and conditions around this foundational element put the enterprise at risk of running afoul of regulations like the European Union Artificial Intelligence Act (“EU AI Act”).
Below are recommended data protection terms you should address to secure your organization's interests.
1. Data Use and "No Training" Guarantees
The most significant shift from SaaS contracts is how vendors use your data once it enters their system. Recommendations regarding the use of customer data for model training include:
- Restricting Model Training: Vendor contracts often default to using customer data to improve their LLMs. You should explicitly negotiate "no training" or "no fine-tuning" clauses that prevent your proprietary or personal data from being used to benefit the vendor's other clients or products.
- Permitted Use: Define exactly what the vendor can do with your data. Ideally, use should be restricted solely to providing the specific service to you. If, however, a customer permits the use of data to train the vendor’s AI models, then the contract should require that the data be de-identified and aggregated first.
Why it matters: Using customer data for training without consent can expose both parties to compliance violations. Clauses around model training and refinement are necessary to prevent the exposure of confidential, proprietary information and trade secrets.
2. Defining Input vs. Output Ownership
Employees, customers and others who interact with AI technology provide “inputs,” which then generate “outputs” from the AI platform. AI contracts must clearly define who owns what at every stage of the lifecycle. Procurement should seek to define the following:
- Inputs: These are the prompts, datasets, or documents you provide. Contracts should state that the customer retains full ownership and that the vendor only has a limited license to process them.
- Outputs: The results generated by the AI should ideally be owned by the customer, especially if they are work products or deliverables. The typical vendor position is that the vendor owns the outputs and grants the customer only a license to use them.
- Derivative Data: This data reflects how a customer uses the tool (e.g., usage frequency, error logs, model embeddings). Vendors typically own this data, but customers should limit its use to service improvements only.
- Audit Rights: Secure rights to audit the vendor’s data-use logs to ensure your proprietary information is being handled as defined in the contract.
Why it matters: AI agreements should address ownership explicitly, or the customer risks difficulty commercializing outputs and even possible infringement of the IP of third parties.
3. AI-Specific Security and Data Privacy Controls
Standard encryption is no longer enough. AI vendors should be compelled by customers to use stronger encryption (e.g., AES-256) and provide attestations and audit results via SOC 2 Type II reporting. To promote security and data privacy, procurement should seek to include the following in the contract:
- Environment Isolation: Seek a "dedicated tenant" environment rather than shared infrastructure to minimize the risk of data "bleeding" between customers. Dedicated tenant environments come at a higher licensing cost, but the increased data security can be worth it.
- Audit Rights: Ensure you have the right to audit the vendor's data handling practices and request recent SOC 2 Type II reports to verify their security claims.
- Technical and Operational Protections: Require specific technical and operational protections for any confidential information. These protections should apply not only within the AI platform itself but also to any connected environment (such as data storage systems). In addition, procurement should consider protections unique to AI, such as:
- Requiring vendors to provide specialized AI monitoring tools that detect risks or vulnerabilities and report on these to the customer at regular intervals;
- Technical controls that prevent AI systems from memorizing or reproducing specific data types.
Why it matters: For customers and vendors alike, data privacy violations can carry significant regulatory fines. Provisions regarding data privacy in AI contracts are essential to ensure that each party understands its responsibilities for data protection.
4. Data Retention and Deletion
- Retention periods should be defined, with secure deletion or return of data required upon termination and supported by written certification.
- Give vendors 30 days to delete or return data upon termination. Where deletion is required, the vendor should provide a certificate of deletion and warrant that the data has been destroyed.
- Require that customer outputs be returned to the customer in a "machine-readable format" to prevent vendor lock-in.
Why it matters: Without specific terms in AI agreements regarding exit strategies, data and models may continue to be used long after an agreement has ended. This can expose a party to compliance risks and IP infringement. AI contracts should provide for audits or verification of deletion to ensure the relationship is fully disengaged.
5. Regulatory Compliance and Transparency
With laws like the GDPR and the California Consumer Privacy Act (CCPA) becoming more stringent regarding automated decision-making, transparency is a contractual must.
- Disclosure of AI Use: If AI technology is embedded within a vendor’s software, then the vendor should proactively disclose where and how AI is used in their services, particularly if it touches sensitive data.
- Human-in-the-Loop: For high-stakes decisions (like hiring or financial approvals), contracts should mandate human oversight mechanisms to review and correct automated outputs.
6. Liability and Indemnification
AI systems can produce biased, inaccurate, or infringing outputs. Standard liability caps, often limited to one month's fees, are frequently insufficient for the scale of potential AI risks. Outside of data, liability and indemnification will be one of the most hotly contested portions of the contract. We recommend the following in an AI contract:
- Uncapped Indemnity: Seek uncapped indemnification for third-party IP infringement claims (e.g., if the AI was trained on copyrighted material without a license) and breaches of confidentiality.
- Super Caps: Vendors will undoubtedly balk at providing any type of uncapped liability. A suitable alternative is a "super cap": an elevated liability cap, well above the standard 12-month fee cap, reserved for third-party IP infringement claims. Because such claims can be catastrophic, customers should seek either unlimited liability or a super cap for them.
- Bias and Errors: Negotiate specific liability for damages resulting from discriminatory outputs or regulatory violations caused by the vendor's model.
Why it matters: Without strong indemnification clauses, it can be unclear who is ultimately responsible for damages resulting from AI IP infringement or other errors and misuse.
Final Thought
AI procurement is as much about risk management as it is about innovation. By embedding strong data protection and risk controls directly into procurement contracts, organizations can adopt AI responsibly while reducing legal and regulatory exposure.
Reach out to us today to discuss how we can support your enterprise in the negotiations of AI contracts. Our global IT category managers and IT procurement strategy consultants are current and former category practitioners who have experience in negotiating these types of contracts.