Privacy and Security
With Cube’s AI API, your database credentials are never shared with the AI provider, and neither is the connection to your data store. All access to the AI API is governed by the same security context as any other access in Cube Cloud.
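For illustration, the sketch below shows an AI API request carrying the same signed security context (a JWT signed with the deployment’s API secret) that governs every other Cube API call. The deployment URL, the AI API path, and the request body shape are placeholders for the example, not documented values; substitute the details from your own Cube Cloud deployment.

```python
# Minimal sketch: calling the AI API with the same signed security context
# used for every other Cube API. The deployment URL, the /api/v1/ai path,
# and the request body shape are illustrative placeholders.
import jwt        # PyJWT
import requests

CUBE_API_SECRET = "YOUR_API_SECRET"               # deployment API secret (placeholder)
DEPLOYMENT_URL = "https://example.cubecloud.dev"  # placeholder deployment URL

# The JWT payload becomes the security context that governs what the
# AI API is allowed to see and query for this request.
token = jwt.encode(
    {"sub": "analyst@example.com", "tenant_id": 42},
    CUBE_API_SECRET,
    algorithm="HS256",
)

response = requests.post(
    f"{DEPLOYMENT_URL}/api/v1/ai",  # placeholder AI API path
    headers={"Authorization": token},
    json={"messages": [{"role": "user", "content": "Total revenue by month in 2024"}]},
)
print(response.json())
```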
Data Retention Policy
By default, the Cube AI API uses Anthropic models via Google Cloud Vertex AI. Your data isn’t used by Google or Anthropic to train models or improve products.
- Google does not retain customer data or use it for training or model improvement purposes.
- Usage is governed by the Anthropic on Vertex Commercial Terms of Service, which specify that Anthropic does not receive access to prompts or outputs and may not train models on customer data.
Dynamic grounding with secure data retrieval
- Relevant information from your Cube semantic layer is merged with the prompt to provide context.
- The metadata available for grounding the prompt is limited to what the user executing the prompt has permission to access.
- Secure data retrieval preserves all standard Cube role-based access controls for user permissions and column- and row-level access when merging grounding data from your Cube semantic layer (see the sketch after this list).
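Because AI-generated queries run through the standard Cube security pipeline, any `query_rewrite` rule already defined in `cube.py` also constrains them. Below is a minimal sketch of such a row-level rule; the `orders.tenant_id` member and the `tenant_id` claim are illustrative names, not part of any real model.

```python
# cube.py -- minimal sketch of a row-level security rule that also applies
# to queries produced through the AI API. `orders.tenant_id` and the
# `tenant_id` claim are illustrative names.
from cube import config


@config('query_rewrite')
def query_rewrite(query: dict, ctx: dict) -> dict:
    security_context = ctx.get('securityContext', {})

    # Only allow rows belonging to the requesting user's tenant.
    tenant_id = security_context.get('tenant_id')
    if tenant_id is None:
        raise Exception('tenant_id is required in the security context')

    query.setdefault('filters', []).append({
        'member': 'orders.tenant_id',
        'operator': 'equals',
        'values': [str(tenant_id)],
    })
    return query
```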
Prompt Defense
- Context provided by the semantic layer limits hallucinations by the LLM.
- LLMs interface with existing Cube APIs, which further constrains what they can do, limits hallucinations, and provides enhanced transparency (see the sketch below).
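As a sketch of that constraint: rather than emitting free-form SQL, the model produces an ordinary Cube query that is executed through the same REST API as any other client’s query, so existing validation, query rewriting, and access checks all apply. The query below stands in for an AI-generated query; the member names, deployment URL, and token are illustrative placeholders.

```python
# Minimal sketch: an AI-generated query is just a regular Cube query,
# executed through the standard REST API so every existing access check applies.
# Member names, the deployment URL, and the token are placeholders.
import json
import requests

DEPLOYMENT_URL = "https://example.cubecloud.dev"  # placeholder
TOKEN = "YOUR_SIGNED_JWT"                         # same signed security context as above

ai_generated_query = {
    "measures": ["orders.total_revenue"],
    "timeDimensions": [{
        "dimension": "orders.created_at",
        "granularity": "month",
        "dateRange": "This year",
    }],
}

# The /cubejs-api/v1/load endpoint enforces the same role-based controls,
# query_rewrite rules, and masking as any hand-written query.
response = requests.get(
    f"{DEPLOYMENT_URL}/cubejs-api/v1/load",
    headers={"Authorization": TOKEN},
    params={"query": json.dumps(ai_generated_query)},
)
print(response.json()["data"])
```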
Data Masking
- Data masking policies enforced by Cube are also enforced in AI API usage.
- You can configure what must and must not be masked in the Cube semantic layer (a sketch follows this list).
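As an illustrative sketch, a masking rule can be expressed in the semantic layer as a Python function registered via Cube’s `TemplateContext` and referenced from a dimension’s `sql` in a YAML model; the function name, the `is_unmasked` claim, and the replacement literal below are assumptions for the example.

```python
# model/globals.py -- minimal sketch of a masking rule defined in the
# semantic layer with Cube's Jinja/Python dynamic data models.
# The `is_unmasked` claim and the replacement literal are illustrative.
from cube import TemplateContext

template = TemplateContext()


@template.function('masked')
def masked(sql: str, security_context: dict) -> str:
    """Return the original column SQL only for users allowed to see it."""
    if security_context.get('is_unmasked', False):
        return sql
    # Everyone else sees a constant literal instead of the real value.
    return "'--- masked ---'"
```

A dimension could then reference it, for example `sql: "{{ masked('email', COMPILE_CONTEXT.securityContext) }}"`, so users without the `is_unmasked` claim see only the masked literal, whether they query through the REST API, the SQL API, or the AI API.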