
PROFESSIONAL SERVICES

Executive AI Assistant

CHALLENGE

A global financial consultancy wanted to build an AI assistant to support executive decision-making at a national telecom company. To meet the client’s requirements, the solution needed to extract information from internal document stores and real-time data sources, and then perform calculations on the fly. A purely model-based solution would therefore be insufficient.


SOLUTION

Using Cohere Command, Embed, and Rerank with retrieval-augmented generation (RAG), the firm leveraged Command’s multi-step capabilities and external agents (e.g., calculators and stock price sources) to retrieve and manipulate structured data. With this custom solution, the models break retrieval and computation tasks into multiple steps, allowing executives to ask complex questions like “Explain how my services margins compare to regional competitors.”

How it works

STEP 1.

Unstructured knowledge-base documents are embedded with Embed and stored in a vector database
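
For illustration, here is a minimal sketch of this indexing step using the Cohere Python SDK. The document snippets, the CO_API_KEY environment variable, the embed-english-v3.0 model choice, and the in-memory NumPy matrix standing in for the vector database are all assumptions for this example, not details of the deployed system.

```python
# Sketch of Step 1: embed knowledge-base documents and index the vectors.
# A NumPy matrix stands in for the production vector database.
import os

import cohere
import numpy as np

co = cohere.Client(os.environ["CO_API_KEY"])

documents = [
    "FY24 Q2 services revenue grew 4% year over year at a 31% gross margin.",
    "Regional competitor benchmarks: average services gross margin of 28%.",
]

# input_type="search_document" tells Embed these texts will be indexed for search.
doc_vectors = co.embed(
    texts=documents,
    model="embed-english-v3.0",
    input_type="search_document",
).embeddings

index = np.array(doc_vectors)  # shape: (num_documents, embedding_dim)
```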

STEP 2.

To answer a user’s request, Command creates a structured work plan and generates queries that retrieve information from connected data sources and tools (e.g., calculators)
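
A hedged sketch of how this planning step can look with the Chat API’s tool-use interface follows; the tool names (query_stock_price, calculator), their parameter definitions, and the command-r-plus model choice are illustrative assumptions.

```python
# Sketch of Step 2: Command plans which tools to call and with what inputs.
import os

import cohere

co = cohere.Client(os.environ["CO_API_KEY"])

# Tool definitions are illustrative assumptions, not the client's actual tools.
tools = [
    {
        "name": "query_stock_price",
        "description": "Returns the latest stock price for a ticker symbol.",
        "parameter_definitions": {
            "ticker": {"description": "Ticker symbol, e.g. 'T'", "type": "str", "required": True},
        },
    },
    {
        "name": "calculator",
        "description": "Evaluates a basic arithmetic expression.",
        "parameter_definitions": {
            "expression": {"description": "An expression such as '0.31 - 0.28'", "type": "str", "required": True},
        },
    },
]

plan = co.chat(
    message="Explain how my services margins compare to regional competitors.",
    model="command-r-plus",
    tools=tools,
)

# The response carries the work plan: which tools to call and with which parameters.
for call in plan.tool_calls or []:
    print(call.name, call.parameters)
```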

STEP 3.

Rerank re-orders the responses from search tools based on their relevance to the original queries, improving the accuracy of the search results
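
A minimal reranking sketch is shown below; the query, candidate snippets, rerank-english-v3.0 model choice, and top_n value are assumptions made for the example.

```python
# Sketch of Step 3: Rerank re-orders raw search hits by relevance to the query.
import os

import cohere

co = cohere.Client(os.environ["CO_API_KEY"])

query = "services gross margin vs. regional competitors"
search_hits = [
    "Employee handbook: travel and expense policy.",
    "Regional competitor benchmarks: average services gross margin of 28%.",
    "FY24 Q2 services revenue grew 4% year over year at a 31% gross margin.",
]

reranked = co.rerank(
    query=query,
    documents=search_hits,
    model="rerank-english-v3.0",
    top_n=2,
)

# Keep only the most relevant hits, in descending relevance order.
top_hits = [search_hits[result.index] for result in reranked.results]
```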

STEP 4.

Command triggers another step in the work plan if required; otherwise, it synthesizes a conversational response, along with citations, and returns it to the user
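
A sketch of this final grounded-generation step follows; the document titles and snippets passed to the model are illustrative assumptions, and the citations returned map spans of the answer back to those sources.

```python
# Sketch of Step 4: Command grounds its answer on retrieved snippets and
# returns citations alongside the conversational response.
import os

import cohere

co = cohere.Client(os.environ["CO_API_KEY"])

final = co.chat(
    message="Explain how my services margins compare to regional competitors.",
    model="command-r-plus",
    documents=[
        {"title": "Q2 financials",
         "snippet": "FY24 Q2 services revenue grew 4% year over year at a 31% gross margin."},
        {"title": "Competitor benchmarks",
         "snippet": "Regional competitors average a 28% services gross margin."},
    ],
)

print(final.text)  # conversational answer returned to the executive
for citation in final.citations or []:
    # Each citation maps a span of the answer back to its source documents.
    print(citation.start, citation.end, citation.document_ids)
```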

Impact

Greater executive and staff productivity

Faster decision-making

Immediate answers to complex, real-time questions

The Cohere Difference

Leading model accuracy

Cohere’s retrieval prioritizes accurate responses and citations

Accelerated enterprise deployment

Cohere’s models come with connectors to common data sources

Customization

Cohere’s models can be fine-tuned to further improve domain performance

Scalability

Cohere’s powerful inference frameworks optimize throughput and reduce compute requirements

Flexible deployment

Cohere models can be accessed through a SaaS API, on cloud infrastructure (Amazon SageMaker, Amazon Bedrock, OCI Data Science, Google Vertex AI, Azure AI), and in private deployments (virtual private cloud and on-premises)

Multilingual support

Over 100 languages are supported, so the same topics, products, and issues are identified consistently across languages