Cohere For AI

Cohere For AI is Cohere's research lab that seeks to solve complex machine learning problems. We support fundamental research that explores the unknown, and are focused on creating more points of entry into machine learning research.

Fundamental research lab

We work at the frontier of AI progress with the goal of solving cutting-edge scientific problems. We see contributions to traditional conferences and publications in journals as an important part of our work, but we also support efforts that go “beyond the research paper” and encourage scientific communication through different mediums. We drive the creation of new research spaces and breakthroughs that change where, how, and by whom research is done. We believe that technology is powerful, and that empowering different perspectives ensures responsible innovation.

Open Science Initiative

We’re not just another research group. We are a hybrid lab with both a dedicated research staff and support for open science initiatives. We collaborate openly with independent researchers all over the world to conduct top-tier ML research.


Our open science research community is a space where researchers, engineers, linguists, social scientists, and lifelong learners connect and collaborate with each other. We come together from over 100 countries around the world and support large and small scale research collaborations.

Our models

State of the Art, Accessible Research LLM

Aya Expanse - 8B

State of the Art Research LLM

Aya Expanse - 32B

Massively Multilingual Research LLM

Aya

Model Weights for Democratizing Research Access

C4AI Command R - 104B

Model Weights for Democratizing Research Access

C4AI Command R - 35B

Our papers

M-RewardBench: Evaluating Reward Models in Multilingual Settings

In this work, we conduct a systematic evaluation of several reward models in multilingual settings.

Mix Data or Merge Models? Optimizing for Diverse Multi-Task Learning

In this work, we explore model merging in a diverse multi-task setting, combining safety and general-purpose tasks within a multilingual context. Overall, our comprehensive study of merging approaches provides a useful framework for building strong and safe multilingual models.

Nexus: Specialization meets Adaptability for Efficiently Training Mixture of Experts

In our latest work, we ask “How to make Mixture-of-Expert models more efficient, specialized, and adaptable at once?” We introduce Nexus, which enables specialization and adaptability within the efficient upcycling framework.

Multilingual Arbitrage: Optimizing Data Pools to Accelerate Multilingual Progress

Can you surpass individual model performance by sampling parts of the distribution strategically from a pool of models? We introduce “multilingual arbitrage” to describe capitalizing on performance variations to produce large gains in performance.

To Code, or Not To Code? Exploring Impact of Code in Pre-training

Including code in the pre-training data mixture, even for models not specifically designed for code, has become common practice in LLM pre-training. We ask: what is the impact of code data used in pre-training on a large variety of downstream tasks beyond code generation?

Consent in Crisis: The Rapid Decline of the AI Data Commons

General-purpose AI systems are built on massive swathes of public web data, assembled into corpora such as C4, RefinedWeb, and Dolma. To our knowledge, we conduct the first large-scale, longitudinal audit of the consent protocols for the web domains underlying AI training corpora.

How Does Quantization Affect Multilingual LLMs?

Quantization techniques are widely used to improve inference speed and deployment of large language models. While a wide body of work examines the impact of quantized LLMs on English tasks, none have examined the effect of quantization across languages.

RLHF Can Speak Many Languages: Unlocking Multilingual Preference Optimization for LLMs

We introduce a novel, scalable method for generating high-quality multilingual feedback data to balance data coverage. We establish the benefits of cross-lingual transfer and increased dataset size in preference training.

LLM See, LLM Do: Guiding Data Generation to Target Non-Differentiable Objectives

Our work exhaustively characterizes the impact of passive inheritance of model properties by systematically studying the consequences of synthetic data integration.

Our programs

Advancing the NLP space through our programs.

Accelerating Multilingual AI Through Open Science

Introducing Aya

About

Aya is a global initiative led by Cohere For AI to advance the state of the art in multilingual AI and bridge gaps between people and cultures across the world. An open science project to create new models and datasets that expand the number of languages covered by AI, Aya involves over 3,000 independent researchers across 119 countries.

Exploring the unknown together

Scholars program

About

Our Scholars Program provides the opportunity to work alongside some of the best research and engineering experts in the world. We have created an open and supportive environment that provides an alternative point of entry into machine learning research.

Academic Support

Research grant

Benefits

Cohere For AI research grants are designed to support academic partners who are conducting research with the goal of releasing a peer-reviewed scientific artifact. Our program provides academic partners, developers, researchers, and other members of our community with subsidized access to the Cohere API.

Past events and videos

Research is inherently a human endeavor, and our event series provides insights from beginning to breakthrough.

Video

Cong Lu: The AI Scientist

Video

Fireside Chat: Max Welling

Video

C4AI Expedition Aya - Closing Ceremony

Video

AI & Technical Governance: Saffron Huang and Tina M. Park, PhD

Video

Panayiotis Panayiotou: Curricula for Learning Robust Policies...

Video

Arthur Conmy: Mechanistic Interpretability Research Frontiers

Meet our research team

Our staff brings together machine learning experts to contribute to progress in machine learning through fundamental research. We are committed to open collaboration, and to empowering more points of entry into machine learning research through our Scholars Program.

Sara Hooker

Head, Cohere For AI

Marzieh Fadaee

Senior Research Scientist

Julia Kreutzer

Senior Research Scientist

Ahmet Üstün

Senior Research Scientist

Beyza Ermis

Senior Research Scientist

Madeline Smith

Operations and Community Lead

Aidan Peppin

Policy & Responsible AI Lead

Brittawnya Prince

Operations Associate

Arielle Salman Bailey

Operations Associate

Saurabh Dash

Research Engineer

Daniel D'souza

Research Engineer

Alejandro Salamanca

Open Science Research Engineer

Shivalika Singh

Open Science Research Engineer

Aakanksha

Research Scholar

Viraat Aryabumi

Research Scholar

John Dang

Research Scholar

Oliver Nan

Research Scholar

Luísa Shimabucoro

Research Scholar

Arash Ahmadian Dehkordi

Research Scholar

Frequently Asked Questions

  • What’s C4AI’s origin story?
    • In 2017, a team of friends, classmates, and engineers started a distributed research collaboration, with a focus on creating a medium for early-career AI enthusiasts to engage with experienced researchers – they called it “for.ai.” Two of those co-founding members, Aidan Gomez and Ivan Zhang, later went on to co-found Cohere, and many of the original members went on to do exciting things (pursuing PhDs, working at industry and academic labs).


      At the time, For AI was one of the first community-driven research groups to support independent researchers around the world. Today, Cohere is proud to reintroduce For AI as Cohere For AI, a dedicated research lab and community for exploring the unknown, together. Watch the C4AI history video here.

  • Do you charge for your educational programs or community membership?
    • We do not charge for participation in any of our programs. We are committed to supporting educational outreach, including the compute resources and infrastructure needed to participate in machine learning research.

  • Are you hiring for research positions or interns?
    • Our full list of open positions is listed here.

  • How can I stay in touch?
  • What is Aya?
    • Aya is a state-of-the-art, open source, massively multilingual research LLM covering 101 languages – including more than 50 previously underserved languages. Learn more here.

Join our open science community

Collaborate with researchers, engineers, linguists, social scientists, and lifelong learners from 100+ countries on top-tier ML research.