Large language models help us build powerful systems that augment human capabilities, but they can also be used in ways that cause harm. We believe language model providers must develop their technologies responsibly. This means proactively working to build safer products and accepting a duty of care to users, the environment, and society.
We’ve invested in technical and non-technical measures to mitigate potential harm and make our development processes transparent. We've also established an advisory Responsibility Council empowered to inform our product and business decisions.
We are excited about the potential for algorithmic language understanding to improve accessibility, enable richer human-computer interaction, and broaden human-to-human dialogue. If you want to use our API to help create a better world, or stay informed of new developments, let us know.
Model the world as we hope it will become.
Anticipate risks and listen to those affected.
Build in mitigation efforts commensurate with expected and actual impacts.
Continually assess the societal impacts of our work.
We believe that no technology can be made absolutely safe, and machine learning is no exception, so we anticipate and account for risks throughout our development process. We run adversarial attacks, filter our training data for harmful text, and measure our models against safety research benchmarks. We also evaluate evolving risks with monitoring tools designed to identify harmful model outputs.
We recognize that misuse of powerful language models will disproportionately impact the most vulnerable, so we aim to balance safety considerations and equity of access. This is an ongoing process. As we release early versions of our technology, we’ll work closely with our partners and users to ensure its safe and responsible use.
We require our users to abide by Cohere's Usage Guidelines, and we will revoke access if these terms are not followed. If you spot the Cohere Platform being used in a harmful or otherwise inappropriate way, please report it to us.