Mistral AI
Mistral AI integration allows your product to connect to Mistral's large language model API for natural language understanding, text generation, summarization, code assistance, and reasoning, using high-performance open-weight and hosted models such as Mistral Small, Mistral Large, and Mixtral. This integration is well suited to applications that need fast, cost-efficient, and privacy-friendly AI models, especially in enterprise or on-premises environments.
Credentials Needed
To connect with Mistral's API, you need an API Key associated with your Mistral account.
Required credentials:
- Mistral API Key (Bearer token), e.g. sk-xxxxxxxxxxxxxxxx
API keys provide full access to your Mistral account quota — handle them securely and never expose them client-side.
Permissions Needed / API Scopes
Mistral currently uses a single API key authentication model. There are no granular scopes — the key grants access to all endpoints available in your account.
| Functionality | Endpoint | Description |
|---|---|---|
| Chat / Completion | /v1/chat/completions | Generate text, code, or reasoning output |
| Embeddings | /v1/embeddings | Generate vector embeddings for semantic tasks |
| Model Info | /v1/models | List available Mistral models |
| Streaming | /v1/chat/completions (with stream=true) | Stream real-time responses |
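As a sketch of how the endpoints in the table above are called, here is a minimal Python client for the /v1/embeddings endpoint using only the standard library. The payload shape (a `model` plus an `input` list, returning vectors under `data[i].embedding`) follows Mistral's OpenAI-compatible schema, and the model name `mistral-embed` is an assumption; check the /v1/models endpoint for the models available to your account.

```python
import json
import urllib.request

MISTRAL_API_BASE = "https://api.mistral.ai"


def build_embeddings_request(api_key, texts, model="mistral-embed"):
    """Build the JSON payload and headers for the /v1/embeddings endpoint."""
    payload = {"model": model, "input": texts}
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    return payload, headers


def get_embeddings(api_key, texts, model="mistral-embed"):
    """POST the texts to /v1/embeddings and return one vector per input."""
    payload, headers = build_embeddings_request(api_key, texts, model)
    req = urllib.request.Request(
        MISTRAL_API_BASE + "/v1/embeddings",
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Each item under "data" is expected to carry an "embedding" vector.
    return [item["embedding"] for item in body["data"]]
```

The request-building step is split out so the payload can be validated without making a network call.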
Creating Users / Access Tokens
Step 1: Generate an API Key
- Go to the official Mistral platform: https://console.mistral.ai
- Log in or create an account
- Navigate to API Keys in the left sidebar
- Click + New Key
- Provide a name for your key (e.g., MistralIntegrationKey)
- Copy and securely store the API Key — it will be shown only once
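Because the key is shown only once, a common pattern is to store it in an environment variable and fail fast at startup if it is missing. A minimal sketch, assuming the variable name MISTRAL_API_KEY used later in this guide:

```python
import os


def load_mistral_api_key():
    """Read the API key from the environment; fail fast if it is missing."""
    key = os.environ.get("MISTRAL_API_KEY")
    if not key:
        raise RuntimeError(
            "MISTRAL_API_KEY is not set; generate a key at https://console.mistral.ai"
        )
    return key
```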
Test Connectivity
You can test the Mistral API connectivity using curl or any OpenAI-compatible SDK (since Mistral follows a similar schema):
```shell
curl https://api.mistral.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <MISTRAL_API_KEY>" \
  -d '{
    "model": "mistral-large-latest",
    "messages": [
      {"role": "user", "content": "Hello, Mistral! Please confirm the integration is working."}
    ]
  }'
```
If the response returns a JSON message with generated content, your API key and connection are correctly configured.
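The same connectivity check can be run from Python. This is a stdlib-only sketch of the curl request above; it assumes the OpenAI-compatible response layout, where the generated text sits under `choices[0].message.content`.

```python
import json
import urllib.request


def extract_reply(body):
    """Pull the generated text out of a parsed chat-completions response."""
    return body["choices"][0]["message"]["content"]


def check_mistral_connectivity(api_key, model="mistral-large-latest"):
    """Send a one-message chat request and return the generated text."""
    payload = {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": "Hello, Mistral! Please confirm the integration is working.",
            }
        ],
    }
    req = urllib.request.Request(
        "https://api.mistral.ai/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return extract_reply(json.load(resp))
```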
Save the Results in the Platform and Create Connection
- In your platform's integration setup, securely store:
  - MISTRAL_API_KEY
  - (Optional) Default model name (e.g., mistral-large-latest)
- Label the connection as Mistral AI Integration
- During setup, test the connection by making a small text generation or reasoning request
- Save the verified connection for further use across features like chat, analysis, or generation
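One way to represent the saved connection is a small record that keeps the key out of logs and default string representations. A sketch with illustrative field names (nothing here is mandated by the Mistral API):

```python
from dataclasses import dataclass, field


@dataclass
class MistralConnection:
    """A stored connection record; field names are illustrative."""

    api_key: str = field(repr=False)  # keep the secret out of repr() and logs
    label: str = "Mistral AI Integration"
    default_model: str = "mistral-large-latest"

    def masked_key(self):
        """Show only the last 4 characters, e.g. for an admin UI."""
        return "****" + self.api_key[-4:]
```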
Best Practices
- Store API keys securely in your secret manager or encrypted configuration (never in client code)
- Rotate API keys regularly and immediately revoke unused ones
- Use the right model for the task:
  - mistral-small-latest: cost-effective and fast
  - mistral-large-latest: for reasoning and enterprise workloads
  - mixtral-8x7b: mixture-of-experts for balanced performance
- Implement rate-limiting and retry logic for 429 or transient network errors
- Use streaming mode for long responses to improve user experience
- Log minimal metadata for debugging — never log full prompt or response contents
- Monitor usage and billing in the Mistral Console to manage costs effectively
- If building a SaaS product, proxy API requests through your backend rather than exposing the key client-side
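The retry advice above can be sketched as exponential backoff around a request, retrying only on HTTP 429 and transient network failures. The delay schedule and retry count are illustrative defaults, not values specified by Mistral:

```python
import time
import urllib.error
import urllib.request


def backoff_delays(retries=4, base=1.0, cap=30.0):
    """Exponential backoff schedule: base * 2**attempt, capped at `cap` seconds."""
    return [min(base * (2 ** attempt), cap) for attempt in range(retries)]


def request_with_retries(req, retries=4):
    """Retry a urllib request on HTTP 429 or transient network errors."""
    for delay in backoff_delays(retries):
        try:
            return urllib.request.urlopen(req)
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise  # non-retryable HTTP error: fail immediately
            time.sleep(delay)
        except urllib.error.URLError:
            time.sleep(delay)  # transient network failure
    # Final attempt: let any remaining exception propagate to the caller.
    return urllib.request.urlopen(req)
```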