
TECHNOLOGY

Improved Search

CHALLENGE

A major provider of SaaS workforce collaboration and productivity tools wanted to improve its search capabilities. Its customers were often frustrated trying to find documents with its legacy search tools.


SOLUTION

Cohere Rerank was integrated into the company’s search infrastructure. Rerank ingests the output of the existing search tools and reorders the responses by relevance to the user’s search query, yielding far more relevant answers and higher customer satisfaction.



How it works

STEP 1.

Existing lexical and semantic search tools are queried with the user’s search term.

STEP 2.

Cohere Rerank embeds the original query and the search results in real time.

STEP 3.

Rerank returns the matches re-ordered by semantic relevance to the original query, along with relevance scores.
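The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not Cohere's implementation: the token-overlap (Jaccard) scorer is a toy stand-in for Rerank's semantic scoring, and the corpus and queries are invented for the example.

```python
# Step 1: an existing keyword search returns candidate documents.
def lexical_search(query, corpus):
    terms = set(query.lower().split())
    return [doc for doc in corpus if terms & set(doc.lower().split())]


# Steps 2-3: score each candidate against the query and re-order.
# A real deployment would call the Rerank endpoint here; this toy
# scorer uses token overlap (Jaccard similarity) as a stand-in.
def rerank(query, candidates):
    q = set(query.lower().split())
    scored = []
    for doc in candidates:
        d = set(doc.lower().split())
        score = len(q & d) / len(q | d)
        scored.append({"document": doc, "relevance_score": round(score, 3)})
    return sorted(scored, key=lambda r: r["relevance_score"], reverse=True)


corpus = [
    "how to reset your account password",
    "password policy for shared team documents",
    "quarterly report template for sales teams",
]
candidates = lexical_search("reset password", corpus)
results = rerank("reset password", candidates)
for r in results:
    print(r["relevance_score"], r["document"])
```

The key point is that the existing search stack stays in place: reranking only re-orders its candidates, which is why integration is fast.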

Impact

Improved search relevance

Improved user satisfaction

Fast time to go live; easy to integrate

The Cohere Difference

Leading model accuracy

Cohere’s embeddings prioritize accurate reranking, even on noisy datasets

Accelerated enterprise deployment

Cohere’s models come with connectors to common data sources

Customization

Cohere’s models can be fine-tuned to further improve domain performance

Scalability

Cohere’s powerful inference frameworks optimize throughput and reduce compute requirements

Flexible deployment

Cohere models can be accessed through a SaaS API, on cloud infrastructure (Amazon SageMaker, Amazon Bedrock, OCI Data Science, Google Vertex AI, and Azure AI), and in private deployments (virtual private cloud or on-premises)

Multilingual support

Over 100 languages are supported, so the same topics, products, and issues are identified consistently across languages