Last Update: October 8, 2024
Cohere maintains robust controls to protect enterprise data and respect our enterprise customers’ rights regarding their data.
Cohere offers several deployment solutions to meet the diverse needs of enterprise customers. Bring Cohere models to your data with private deployments and deployments on third-party cloud AI/ML platforms, or use the Cohere SaaS Platform to leverage Cohere-managed infrastructure.
In third-party cloud AI/ML platforms and private deployment solutions, Cohere does not receive any customer inputs (prompts) or outputs (generations).
Keep reading to learn more about our robust enterprise data controls in the Cohere SaaS Platform.
Sharing your data for training on the Cohere SaaS Platform helps improve our models for you, but if you want to opt out, we make it easy.
You can opt out of your prompts, generations, and finetune data being used to train Cohere models at any time in your dashboard settings.
If you upload content from third-party applications, such as Google Drive, to the Cohere SaaS Platform, Cohere does not use any of that content, or your prompts or generations about it, to train our models. No action is needed on your part to opt out.
We automatically log and monitor use of our SaaS Platform for compliance with our customer agreements and Usage Policy, and for security risks to our services.
Our in-house custom classifiers and prompt injection guard filters label potentially violative prompts. If they flag possible misuse of our SaaS Platform, we trigger additional threat detection efforts to enforce our customer agreements, including our Usage Policy, and to secure our services from misuse. Our safety and security teams may review user prompts, generations, and logs for these purposes. Our safety team may also aggregate flagged prompts and generations, after removing customer identifiers, to evaluate our models’ ability to detect safety issues and enforce our Usage Policy.
We apply the following data handling and retention controls on the SaaS Platform:
We automatically delete logged prompts and generations after 30 days, unless we need them to comply with a legal requirement or customer contract, or unless your usage is flagged as potentially violating our terms, including our Usage Policy (e.g., abuse or misuse of our services). Data you allow us to use for training purposes is stored and handled in accordance with our agreement with you.
You control retention of conversation history and finetune data sets. You can delete chat history and finetune datasets directly in your account, and deleted chat histories and finetune datasets are purged from Cohere’s backend systems after 7 days.
We filter and strip common types of personal information from prompts and generations before they are used for training Cohere models (if you are opted in).
If your usage is flagged as potentially violating our terms, including our Usage Policy, we may retain and review the flagged user prompts and associated logs to enforce our policies. We may also aggregate flagged prompts and generations, after removing customer identifiers, to evaluate our models’ ability to detect safety issues and enforce our Usage Policy.
If you have been approved for zero data retention, Cohere does not log any customer prompts or generations. See our FAQ below for more information.
Cohere also collects and uses certain usage data that doesn’t identify customers, such as frequency and duration of usage, features accessed, user preferences, and aggregate counts of input prompt tokens, to understand how our services are used and to improve performance.
We support our enterprise customers’ privacy and data security compliance needs by offering multiple deployment options so customers can control access to data and personal information under their control.
To complete your privacy and security compliance reviews, visit Cohere’s Trust Center, where you can request a copy of our SOC 2 Type II Report and review our privacy documentation and other compliance resources.