# Mistral + Moorcheh

This integration uses the Mistral Embeddings API with `mistral-embed` and Moorcheh vector namespaces to store and search vectors with ITS ranking.

The `mistral-embed` model returns 1024-dimensional vectors (L2-normalized, so cosine similarity and dot product are equivalent).
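To see why normalization makes the two scores interchangeable, here is a toy sketch with small hand-made vectors (not real embeddings):

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length, as mistral-embed output already is."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    # cos(a, b) = a·b / (|a| |b|); for unit vectors the denominator is 1.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a = l2_normalize([3.0, 4.0])
b = l2_normalize([1.0, 2.0])

# For unit-length vectors, dot product and cosine similarity coincide.
assert abs(dot(a, b) - cosine(a, b)) < 1e-12
```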
## Architecture

1. **Embedding generation** — the `mistralai` SDK: `client.embeddings.create(model="mistral-embed", inputs=[...])`
2. **Vector storage** — store vectors in Moorcheh vector namespaces
3. **Semantic retrieval** — embed the query with the same model and run vector search
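Step 1 in isolation, using the SDK call named above (requires a valid `MISTRAL_API_KEY` in the environment, so it is not runnable offline):

```python
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
resp = client.embeddings.create(model="mistral-embed", inputs=["Hello, Moorcheh!"])
embedding = resp.data[0].embedding  # list of 1024 floats, L2-normalized
print(len(embedding))
```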
## Authentication

`MISTRAL_API_KEY` from the Mistral platform.

## Prerequisites

- `MOORCHEH_API_KEY` from the Moorcheh Console
- `MISTRAL_API_KEY` from Mistral AI
- Python 3.9+
- `mistralai` — install with `pip install mistralai` and use `from mistralai import Mistral` (Mistral SDK v1+). If you see `ModuleNotFoundError: No module named 'mistralai'`, install the package with the same interpreter you use to run the script.
### .env file
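For example, a minimal `.env` at the project root (the values shown are placeholders):

```
MISTRAL_API_KEY=your-mistral-key
MOORCHEH_API_KEY=your-moorcheh-key
```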
## Vector dimensions

`mistral-embed` outputs 1024 dimensions per text. Set the Moorcheh namespace `vector_dimension` to 1024.
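Creating the namespace might look like the sketch below; the client class and `create_namespace` parameters are assumptions about the Moorcheh Python SDK, so check them against the demo script:

```python
import os

from moorcheh_sdk import MoorchehClient  # assumption: Moorcheh's Python SDK

client = MoorchehClient(api_key=os.environ["MOORCHEH_API_KEY"])
client.create_namespace(
    namespace_name="mistral-docs",  # hypothetical namespace name
    type="vector",
    vector_dimension=1024,          # must match mistral-embed output
)
```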
## End-to-end example

The following example loads keys from `.env`, embeds chunks and a query with `mistral-embed`, uploads to Moorcheh, and runs similarity search.
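A condensed sketch of that flow. The Mistral call matches the SDK usage shown earlier; the Moorcheh calls (`create_namespace`, `upload_vectors`, `search`) and the vector payload shape are assumptions about the Moorcheh SDK — the demo script is the authoritative version. Keys must be exported or loaded from `.env` first (e.g. with python-dotenv):

```python
import os

from mistralai import Mistral
from moorcheh_sdk import MoorchehClient  # assumption: Moorcheh's Python SDK

NAMESPACE = "mistral-docs"  # hypothetical namespace name
chunks = [
    "Moorcheh ranks vectors with information-theoretic scoring.",
    "mistral-embed produces 1024-dimensional embeddings.",
]

mistral = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
moorcheh = MoorchehClient(api_key=os.environ["MOORCHEH_API_KEY"])

# 1) Create a namespace whose dimension matches mistral-embed output.
moorcheh.create_namespace(namespace_name=NAMESPACE, type="vector", vector_dimension=1024)

# 2) Embed all chunks in one batch call and upload, keeping the text
#    on each vector so search results can return the original chunk.
resp = mistral.embeddings.create(model="mistral-embed", inputs=chunks)
vectors = [
    {"id": f"chunk-{i}", "vector": d.embedding, "metadata": {"text": chunks[i]}}
    for i, d in enumerate(resp.data)
]
moorcheh.upload_vectors(namespace_name=NAMESPACE, vectors=vectors)

# 3) Embed the query with the SAME model, then run vector search.
query = "What dimension are Mistral embeddings?"
q_vec = mistral.embeddings.create(model="mistral-embed", inputs=[query]).data[0].embedding
results = moorcheh.search(namespaces=[NAMESPACE], query=q_vec, top_k=2)
print(results)
```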
## Runnable demo script

See `integrations/mistral/mistral_moorcheh_demo.py`.
## Important notes

- **Vector dimension must match** — `mistral-embed` is 1024 dimensions; create the Moorcheh namespace with `vector_dimension=1024`.
- **Same model for index and query** — use `mistral-embed` for both stored chunks and search queries.
- **Store text on each vector** — include `text` on each uploaded vector so search results can return the original chunk.

## Troubleshooting

- Auth errors: verify `MISTRAL_API_KEY` in `.env` or the environment.
- Dimension mismatch: the namespace must be 1024 for default `mistral-embed` output.