Cohere Labs: Canadian Research Grant Program

Research Grants
Cohere Labs research grants are designed to support academic partners in Canada who are conducting research with the goal of releasing a peer-reviewed scientific artifact. The program provides academic researchers, developers, and other members of our community with subsidized access to the Cohere API to support their research into advancing safe, responsible LLM capabilities and applications.
About the program
Cohere Labs Canadian Research Grants support researchers who are advancing the field of machine learning and natural language processing, or applying large language models to other research fields or public benefit projects.
We are proud to support projects across Canada in a range of research areas, including language model safety, applications for prosocial goals, multilingual capabilities (including both of Canada's official languages), values alignment, and more. These areas closely align with our lab's commitment to advancing AI in a safe and responsible manner.

Access to the resources needed to conduct machine learning research, such as compute and state-of-the-art large language models (LLMs), is not always easily available to researchers. In an effort to help narrow this gap, Cohere Labs launched our Research Grant program in July 2023.
250 Research Grants Awarded
$475k CAD in API Credits Granted
35 Countries Represented Among Recipients
100 University Affiliations
Projects by Our Grantees
Building LLM tools for teachers

Jussi Jauhiainen and Agustín Garagorry, University of Turku, Finland
Teachers and learners worldwide are exploring AI tools in education, but experimentation is often limited by a lack of evidence on their effective and responsible use.
Jussi, Agustín, and their team are using LLMs to adapt learning materials to individual learners, creating a more tailored, incremental learning experience. Their recent research compares LLMs for student performance evaluation, emphasizing the importance of choosing the right tools, understanding risks, and supporting informed decision-making. Currently working in Finnish, they plan to expand to English, Spanish, and several African languages with partners.
The Cohere Labs research grant enabled them to test Cohere’s models, broadening their evidence base and expanding their study.

Blog Posts
Supporting Researchers to Use LLMs
Announcing the Cohere Labs Research Grant Program
Frequently Asked Questions
What types of projects do these grants support?
Language model safety, including bias, explainability, hallucinations, toxicity, and adversarial testing
Language model applications for prosocial goals in fields such as education, climate science, content moderation, healthcare, law, history, and social science
Language model capabilities, such as improving information retrieval accuracy or model efficiency
Multilingual capabilities, such as increasing language model performance and safety in languages beyond English
Values alignment exploring how to ensure generative AI model behavior meets people’s expectations, preferences and values
How does the Canadian grants program differ from your standard grants program?
The Canadian grants program functions similarly to our standard grants program; however, when assessing applications for the Canadian program, we prioritise criteria such as regional focus and relevance to Canadian research priorities. We also seek opportunities to make fruitful connections between researchers in the Canadian program to support cross-institutional collaborations and networks.
What do I need to know if my research project artifact is complete or ready to be published?
We are often asked how the grants should be acknowledged. If you would like to acknowledge the grant program, we have drafted a simple message below that you are welcome to use and adapt.
“This work was supported by compute credits from a Cohere Labs Research Grant. These grants are designed to support academic partners conducting research with the goal of releasing scientific artifacts and data for good projects.”
Is it necessary for me to share my research results with you and obtain your approval prior to submitting my work for publication?
We are often asked about the process for sharing experiment results with us before paper publication. Generally, we do not require an approval process for research findings, except in cases involving AI safety or responsible AI. However, we welcome an update on the research you conducted with the grant, as we hope to spotlight some of the research contributions that come out of the program.
If your work pertains to AI safety, you are expected to immediately and unconditionally disclose any vulnerabilities found through your research and testing by notifying safety@cohere.ai, and to share any research outputs (papers, datasets, etc.) with labs@cohere.com at least 4 weeks before publication.
What if I need additional credits to complete my research project?
If you use all of the credits provided for your project and would like to request additional funds, we encourage you to reach out via email to brittawnya@cohere.com
How can I monitor the balance of my grant?
You can monitor the balance of your grant via the Billing tab of your Cohere dashboard. Your grant credit will be applied to any usage of our models with a Production API key. If you have not already done so, you can create a Production API key in the API Key tab of your dashboard.
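If it helps to confirm that your usage is drawing against the grant credit, a minimal sketch using the Cohere Python SDK is shown below. The key placeholder and model name are illustrative assumptions; any usage of our models with your Production API key counts against the credit and appears under the Billing tab.

import cohere

# Replace with the Production API key created in the API Key tab of your dashboard.
co = cohere.Client("YOUR_PRODUCTION_API_KEY")

# A single chat call; the resulting usage is billed against your grant credit
# and can be monitored in the Billing tab.
response = co.chat(
    model="command-r",  # illustrative model name
    message="Summarize the goals of responsible LLM research in one sentence.",
)
print(response.text)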
Can I invite colleagues working on my project to use the credits?
Yes. To invite and manage teammates and collaborators, please visit the Team tab of your dashboard.

Cohere Labs
Cohere Labs is Cohere's non-profit research lab that seeks to solve complex machine learning problems. We support fundamental research that explores the unknown, and are focused on creating more points of entry into machine learning research.