

Investment Research Assistant


A financial research platform for investment firms wanted to build a natural language interface that lets its clients ask complex questions and get synthesized answers across financial reports, analyst research, investor call transcripts, and other data. The company first needed to improve search relevance to boost client productivity and satisfaction; it then planned to build a full conversational AI assistant using retrieval-augmented generation (RAG).


The company discovered Cohere Rerank through the AWS Marketplace. It fine-tuned and deployed Rerank, which delivered an immediate improvement in search relevance on top of its existing legacy search tools. The customer is now implementing Cohere Embed to further improve search performance. Finally, it will build the AI assistant on a fine-tuned Command model, letting users ask questions and receive synthesized, cited answers through a natural language interface.

How it works


Unstructured financial documents are embedded by Embed and stored in a vector database.
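The mechanics of this step can be sketched with a toy in-memory vector store. The bag-of-words "embedding" and the `VectorStore` class below are illustrative stand-ins, not Cohere's Embed API: in production, Embed produces dense semantic vectors and a dedicated vector database handles storage and search.

```python
import math
from collections import Counter

def embed(text: str, vocab: list[str]) -> list[float]:
    # Toy bag-of-words "embedding": one dimension per vocabulary term.
    # A real embedding model maps text to a dense semantic vector instead.
    counts = Counter(text.lower().split())
    return [float(counts[term]) for term in vocab]

class VectorStore:
    """Minimal in-memory vector database: stores vectors, searches by cosine similarity."""

    def __init__(self):
        self.items = []  # (vector, document) pairs

    def add(self, vector: list[float], document: str) -> None:
        self.items.append((vector, document))

    def search(self, query_vector: list[float], top_k: int = 2) -> list[str]:
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(self.items, key=lambda item: cosine(query_vector, item[0]), reverse=True)
        return [doc for _, doc in ranked[:top_k]]

vocab = ["revenue", "guidance", "margin", "dividend"]
store = VectorStore()
store.add(embed("Q3 revenue beat guidance", vocab), "10-Q excerpt")
store.add(embed("dividend raised by 5%", vocab), "press release")

results = store.search(embed("what was the revenue guidance", vocab), top_k=1)
# results == ["10-Q excerpt"]
```

The key design point survives the simplification: documents are embedded once at ingestion time, while queries are embedded at request time and matched by vector similarity.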


Fine-tuned Command interprets a user's request and generates queries to retrieve relevant answers across legacy and vector search sources.
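Conceptually, this step fans a user request out to both search backends and merges the results. The sketch below substitutes a trivial keyword heuristic for the fine-tuned model's query generation; the backend callables are hypothetical placeholders, not part of Cohere's API.

```python
def generate_queries(user_request: str) -> list[str]:
    # Stand-in for the fine-tuned model's query generation: derive one
    # keyword-only query plus the original full-text request.
    keywords = [w for w in user_request.lower().split() if len(w) > 3]
    return [" ".join(keywords), user_request]

def federated_search(user_request, legacy_search, vector_search) -> list[str]:
    # Run each generated query against both sources, merging and deduplicating hits.
    seen, merged = set(), []
    for query in generate_queries(user_request):
        for hit in legacy_search(query) + vector_search(query):
            if hit not in seen:
                seen.add(hit)
                merged.append(hit)
    return merged

# Hypothetical backends returning document identifiers.
hits = federated_search(
    "What did analysts say about margins?",
    legacy_search=lambda q: ["analyst note #12"],
    vector_search=lambda q: ["earnings call transcript", "analyst note #12"],
)
# hits == ["analyst note #12", "earnings call transcript"]
```

Deduplicating across sources matters here because legacy keyword search and vector search often surface the same underlying documents.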


Fine-tuned Rerank re-orders the retrieved results by relevance, improving the accuracy of search results.
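A minimal sketch of the reranking step, using simple term overlap as the relevance score. This is only a stand-in for illustration: the actual Rerank model scores each query-document pair with a learned model rather than counting shared words.

```python
def rerank(query: str, documents: list[str], top_n: int = 3) -> list[str]:
    # Toy relevance scorer: rank candidates by how many query terms they share.
    # A production reranker scores each (query, document) pair with a model.
    query_terms = set(query.lower().split())

    def score(doc: str) -> int:
        return len(query_terms & set(doc.lower().split()))

    return sorted(documents, key=score, reverse=True)[:top_n]

candidates = [
    "dividend policy unchanged this quarter",
    "gross margin expanded on lower input costs",
    "margin guidance raised for the full year",
]
top = rerank("full year margin guidance", candidates, top_n=2)
# top[0] == "margin guidance raised for the full year"
```

The pattern to note is that reranking is a second pass over an already-retrieved candidate list: it trades a small amount of latency for a more accurate ordering of the top results.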


Fine-tuned Command synthesizes a conversational response for the user, incorporating the retrieved answers along with citations.
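The citation mechanics of this final step can be sketched as follows. The function below only shows how retrieved snippets map to numbered citation markers in the response; in the real system, the fine-tuned model writes the prose itself, grounded in the snippets. All snippet text and source names are invented for illustration.

```python
def synthesize(question: str, snippets: list[tuple[str, str]]) -> str:
    # Assemble a grounded answer with numbered citations.
    # snippets: (text, source) pairs retrieved and reranked upstream.
    body = " ".join(f"{text} [{i}]" for i, (text, _) in enumerate(snippets, 1))
    sources = "\n".join(f"[{i}] {source}" for i, (_, source) in enumerate(snippets, 1))
    return f"Q: {question}\nA: {body}\n\nSources:\n{sources}"

# Hypothetical retrieved snippets for illustration only.
answer = synthesize(
    "How did margins trend?",
    [
        ("Gross margin expanded 120 bps.", "Q3 earnings call transcript"),
        ("Full-year margin guidance was raised.", "Analyst note"),
    ],
)
print(answer)
```

Keeping an explicit snippet-to-citation mapping is what lets the assistant show users exactly which report or transcript each claim came from.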


Improved search accuracy

Increased client satisfaction

Higher client productivity

The Cohere Difference

Leading model accuracy

Cohere’s retrieval prioritizes accurate responses and citations

Accelerated enterprise deployment

Cohere’s models come with connectors to common data sources



Cohere’s models can be fine-tuned to further improve domain performance



Cohere’s powerful inference frameworks optimize throughput and reduce compute requirements

Flexible deployment

Cohere models can be accessed through a SaaS API, on cloud infrastructure (Amazon SageMaker, Amazon Bedrock, OCI Data Science, Google Vertex AI, Azure AI), and in private deployments (virtual private cloud and on-premises)

Multilingual support

Over 100 languages are supported, so the same topics, products, and issues are identified consistently across languages