LangChain

LangChain is a framework designed to simplify the creation of applications using large language models.

To get started with LangChain, follow the instructions in its documentation.

Document loader

Cube's integration with LangChain is a document loader intended to populate a vector database with embeddings derived from the data model. This vector database can later be queried to find the best-matching entities of the semantic layer, which is useful for matching free-form input, e.g., natural-language queries, with the views and their members in the data model.
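
For illustration, here is a minimal sketch of how the document loader could be used to populate a vector store. The CubeSemanticLoader class ships with LangChain; the API URL and token below are placeholders (see "Configuring the connection to Cube"), and FAISS with OpenAI embeddings is only one possible choice of vector store and embeddings model.

```python
from langchain_community.document_loaders import CubeSemanticLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Load views and their members from the Cube data model as documents.
# The API URL and token are placeholders; see "Configuring the connection
# to Cube" below for where to find them.
loader = CubeSemanticLoader(
    cube_api_url="https://example.cubecloud.dev/cubejs-api/v1/meta",
    cube_api_token="<JWT>",
)
documents = loader.load()

# Populate a vector database with embeddings of the loaded documents.
# FAISS and OpenAI embeddings are used here only as an example; any
# embeddings model and vector store supported by LangChain would do.
vector_store = FAISS.from_documents(documents, OpenAIEmbeddings())
```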

We also provide a chat-based demo application (see the source code on GitHub) with example OpenAI prompts for constructing queries to Cube's SQL API. If you wish to create an AI-powered conversational interface for the semantic layer, these prompts can be a good starting point.

Configuring the connection to Cube

The document loader connects to Cube via the REST API and requires the REST API URL and a JWT to authenticate.

If you're using Cube Cloud, you can retrieve these details from a deployment's Overview page.
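
If you manage tokens yourself, one way to obtain a JWT is to sign it with the deployment's API secret. The sketch below is an assumption-laden example using PyJWT: the URL and secret are placeholders, and HS256 is Cube's default signing algorithm.

```python
import datetime

import jwt  # PyJWT

# Placeholders: the REST API URL and API secret of your deployment.
CUBE_API_URL = "https://example.cubecloud.dev/cubejs-api/v1/meta"
CUBE_API_SECRET = "<your API secret>"

# Sign a short-lived JWT with the API secret. HS256 is Cube's default
# signing algorithm; the payload may also carry security context
# attributes if your deployment relies on them.
token = jwt.encode(
    {"exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=7)},
    CUBE_API_SECRET,
    algorithm="HS256",
)
```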

Querying Cube

Please refer to the blog post for details on querying Cube and building a complete AI-based application.
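
As a rough sketch of the querying step, the vector store populated earlier can be searched with a free-form question to surface the best-matching members of the data model; the question and member names below are purely illustrative.

```python
# Match a natural-language question against the data model using the
# vector store populated earlier.
question = "How many orders did we have last month?"
matched = vector_store.similarity_search(question, k=5)

for doc in matched:
    # Each document describes a member of a view; its metadata carries
    # details such as the view and member names, which can then be fed
    # into a prompt that generates a query for Cube's SQL API.
    print(doc.page_content, doc.metadata)
```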

Also, please feel free to review the chat-based demo application's source code on GitHub.