Moorcheh Enterprise Changelog
Track enterprise-specific releases in one place, including access control, metadata filtering, enterprise retrieval controls, and ingestion architecture updates.
Deployment footprint
- Current enterprise deployments run on both AWS and GCP.
User management and access control
- Admin can create namespaces.
- Admin can delete namespaces, including async delete flows for large namespaces.
- Admin can invite users and list invited users.
- Admin can grant namespace access to specific users.
- Admin can revoke namespace access.
- Admin can remove users when needed.
Namespace access model
- Namespaces can be owned by a user/admin and shared with other users.
- Access is managed at namespace granularity.
- Search and answer flows honor namespace-level access boundaries.
- This enables secure multi-tenant collaboration and controlled sharing.
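The access model above can be sketched as a minimal in-memory grant table. This is an illustration of the semantics only; the class and method names are assumptions, not the actual Moorcheh access APIs.

```typescript
// Minimal sketch of namespace-level access control (illustrative only).
// Each namespace has one owner; access is granted per user at namespace granularity.
type UserId = string;
type Namespace = string;

class AccessRegistry {
  private owners = new Map<Namespace, UserId>();
  private grants = new Map<Namespace, Set<UserId>>();

  createNamespace(ns: Namespace, owner: UserId): void {
    this.owners.set(ns, owner);
    this.grants.set(ns, new Set());
  }

  grant(ns: Namespace, user: UserId): void {
    this.grants.get(ns)?.add(user);
  }

  revoke(ns: Namespace, user: UserId): void {
    this.grants.get(ns)?.delete(user);
  }

  // Search and answer flows check this boundary before touching namespace data.
  canAccess(ns: Namespace, user: UserId): boolean {
    return this.owners.get(ns) === user || (this.grants.get(ns)?.has(user) ?? false);
  }
}
```

The key property is that every read path funnels through a single namespace-granularity check, which is what makes multi-tenant sharing enforceable.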
Namespace lifecycle and data management
- Create namespace.
- List namespaces.
- Delete namespace.
- Delete namespace data.
- Get namespace data/documents.
- Document status tracking.
Metadata capabilities
File-level metadata management
- Metadata is supported in file upload, upload-url, text upload, and vector upload flows.
- Metadata can be updated for existing documents/items after ingestion.
- Supported metadata types include string, number, boolean, and array.
- Metadata values support spaces and most special characters; `#` is excluded for indexed filtering use-cases.
Metadata filter feature (search-time filtering)
- Search supports structured metadata filters.
- Per-filter matching mode (`match`) controls multi-value evaluation inside a filter.
- Cross-filter combination mode (`combine`) controls evaluation across filters.
- Logical patterns support both `any` and `all`.
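One plausible reading of the `match`/`combine` semantics above can be sketched as a pure evaluation function. The field names and filter shape here are assumptions for illustration, not the exact API contract.

```typescript
// Illustrative evaluation of structured metadata filters.
// `match` combines the values inside one filter ("any"/"all");
// `combine` combines the results across filters ("any"/"all").
type Scalar = string | number | boolean;
type Metadata = Record<string, Scalar | Scalar[]>;

interface Filter {
  key: string;
  values: Scalar[];
  match: "any" | "all";
}

function matchesFilter(meta: Metadata, f: Filter): boolean {
  const raw = meta[f.key];
  if (raw === undefined) return false;
  const actual = Array.isArray(raw) ? raw : [raw];
  const hit = (v: Scalar) => actual.includes(v);
  return f.match === "any" ? f.values.some(hit) : f.values.every(hit);
}

function matchesAll(meta: Metadata, filters: Filter[], combine: "any" | "all"): boolean {
  const check = (f: Filter) => matchesFilter(meta, f);
  return combine === "any" ? filters.some(check) : filters.every(check);
}
```

For example, `match: "all"` on an array-valued field requires every listed value to be present, while `combine: "any"` lets a document pass if any single filter matches.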
Metadata as a sharing primitive
- Metadata-driven filtering can model sharing and visibility logic (team scope, ownership hints, publish flags, access tags).
- This enables consistent retrieval enforcement in search and answer experiences.
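A concrete way to use metadata filters as a sharing primitive is to expand a user's identity into a set of visibility clauses at query time. The field names (`published`, `owner`, `shared_teams`) are hypothetical examples of the pattern, not a fixed schema.

```typescript
// Sketch: modelling sharing/visibility as search-time metadata filters.
// A document is visible if it is published, owned by the requesting user,
// or shared with one of the user's teams (field names are illustrative).
interface VisibilityClause {
  key: string;
  values: Array<string | boolean>;
  match: "any";
}

function visibilityFilters(userId: string, teams: string[]): VisibilityClause[] {
  return [
    { key: "published", values: [true], match: "any" },
    { key: "owner", values: [userId], match: "any" },
    { key: "shared_teams", values: teams, match: "any" },
  ];
}
// Sent with combine: "any", so matching any single clause grants visibility.
```

Because the same filter set is applied in both search and answer flows, retrieval enforcement stays consistent across experiences.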
Search and RAG enterprise controls
- In addition to core platform search, enterprise supports metadata-constrained retrieval and policy-like filtering behavior.
- Inline keyword constraints with `#keyword` are supported as a refinement stage.
- One-call answer APIs support retrieval + context assembly + generation with namespace-scoped grounding.
- Model selection controls can be applied according to deployment and governance needs.
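The controls above can be pictured as a single request payload that carries the query, the namespace scope, optional metadata filters, and a model selection hint. Every field name below is an assumption for illustration; this is not the actual Moorcheh request contract.

```typescript
// Hypothetical one-call answer request: retrieval constraints, namespace-scoped
// grounding, and a model selection hint in a single payload (illustrative shape).
interface AnswerRequest {
  namespace: string; // grounding is scoped to this namespace
  query: string;     // may include inline #keyword refinements
  filters?: Array<{ key: string; values: Array<string | number | boolean>; match: "any" | "all" }>;
  combine?: "any" | "all";
  model?: string;    // e.g. a Bedrock model id chosen per region/governance policy
}

function buildAnswerRequest(namespace: string, query: string, model?: string): AnswerRequest {
  // Omit the model field entirely when no override is requested,
  // letting the deployment default apply.
  return { namespace, query, ...(model ? { model } : {}) };
}
```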
Region and model flexibility
- Current AWS embedding model is Cohere Embed v4 (`cohere.embed-v4`, runtime variant `cohere.embed-v4:0` on Bedrock).
- AI models can be changed based on enterprise requirements.
- AWS deployments can choose Bedrock AI models by target region.
- GCP deployments can also select models based on regional and customer requirements.
- This supports compliance, latency, and cost/performance alignment.
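One way to realize per-region model selection is a small deployment-keyed lookup. The region key and the GCP placeholder below are assumptions; only `cohere.embed-v4:0` on AWS Bedrock is confirmed by this changelog.

```typescript
// Sketch: embedding model selection keyed by cloud and region (illustrative entries).
const embeddingModelByDeployment: Record<string, string> = {
  "aws:us-east-1": "cohere.embed-v4:0", // Bedrock runtime variant from this changelog
  // GCP entries would be filled in per regional and customer requirements.
};

function embeddingModelFor(cloud: string, region: string): string | undefined {
  return embeddingModelByDeployment[`${cloud}:${region}`];
}
```

Returning `undefined` for unconfigured deployments lets callers fall back to a governance-approved default rather than silently picking a model.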
Large file upload (up to 5 GB) via upload URL
- Client requests upload URL from API (`upload-url` endpoint).
- API returns pre-signed S3 PUT URL and storage target details.
- Client uploads file directly to object storage.
- Backend processing pipeline ingests file asynchronously.
- Chunks, embeddings, and metadata are written to data stores for search and RAG.
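The client side of the flow above can be sketched in two steps. The request/response field names (`x-api-key`, `uploadUrl`) and the exact endpoint path are assumptions for illustration, not the documented contract; the 5 GB limit comes from this changelog.

```typescript
// Sketch of the large-file direct upload flow: (1) request a pre-signed PUT URL,
// (2) PUT the file bytes straight to object storage; ingestion then runs async.
const MAX_UPLOAD_BYTES = 5 * 1024 ** 3; // 5 GB limit from the changelog

function assertUnderLimit(sizeBytes: number): void {
  if (sizeBytes > MAX_UPLOAD_BYTES) {
    throw new Error(`file exceeds 5 GB upload limit (${sizeBytes} bytes)`);
  }
}

async function uploadLargeFile(apiBase: string, apiKey: string, namespace: string, file: Blob): Promise<void> {
  assertUnderLimit(file.size);
  // Step 1: request a pre-signed URL (hypothetical request/response shape).
  const res = await fetch(`${apiBase}/upload-url`, {
    method: "POST",
    headers: { "x-api-key": apiKey, "content-type": "application/json" },
    body: JSON.stringify({ namespace, fileName: "document.pdf" }),
  });
  const { uploadUrl } = await res.json();
  // Step 2: upload directly to object storage; the API server never sees the bytes.
  await fetch(uploadUrl, { method: "PUT", body: file });
}
```

Uploading straight to storage keeps multi-gigabyte payloads off the API tier, which is why the ingestion pipeline can run fully asynchronously afterwards.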
Processing pipeline overview
- Document, text, and vector ingestion jobs run asynchronously.
- Embedding and vector preparation are handled in worker flows.
- Chunk persistence is handled in worker flows.
- Namespace/data deletion jobs and document status updates run asynchronously.
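Asynchronous document status tracking implies a small state machine for each ingestion job. The status names and transitions below are an illustrative sketch, not the platform's actual states.

```typescript
// Sketch: document status tracking for async ingestion (status names are assumptions).
type DocStatus = "queued" | "processing" | "ready" | "failed" | "deleting";

const allowedTransitions: Record<DocStatus, DocStatus[]> = {
  queued: ["processing", "deleting"],
  processing: ["ready", "failed", "deleting"],
  ready: ["deleting"],
  failed: ["queued", "deleting"], // e.g. a retry re-queues the job
  deleting: [],                   // terminal
};

function transition(current: DocStatus, next: DocStatus): DocStatus {
  if (!allowedTransitions[current].includes(next)) {
    throw new Error(`invalid status transition: ${current} -> ${next}`);
  }
  return next;
}
```

Rejecting invalid transitions in one place keeps worker flows (embedding, chunk persistence, deletion) from racing each other into inconsistent states.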
Implementation coverage
- Enterprise features are implemented across both Node.js and Rust service stacks.
- Coverage includes user/access management, namespace lifecycle, ingestion, metadata update/filter flows, enterprise retrieval controls, answer workflows, and upload-url large-file support.
Feature summary for enterprise stakeholders
- Admin-led user and namespace governance.
- Namespace sharing and access control.
- Full metadata lifecycle support (attach, update, filter).
- Advanced metadata filtering logic for retrieval control.
- Enterprise retrieval controls layered on top of platform search capabilities.
- One-call RAG with configurable AI model selection by region and customer choice.
- Large-file direct upload flow (up to 5 GB).
- Parallel Node.js and Rust implementation coverage for core platform capabilities.