Using Deep Lake as a vector store in RAG applications.

Deep Lake as a Vector Store for LLM Applications

  • Store and search embeddings and their metadata, including text, JSON, images, audio, video, and more. Save the data locally, in your cloud, or on Deep Lake storage.

  • Build Retrieval Augmented Generation (RAG) apps using our integrations with LangChain and LlamaIndex.

  • Run computations locally or on our Managed Tensor Database.
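To make the first point concrete, here is a minimal, self-contained sketch of what a vector store does: it keeps embeddings alongside their metadata and answers similarity queries. The `TinyVectorStore` class below is purely illustrative and is not Deep Lake's actual API; Deep Lake adds persistence (local, cloud, or managed storage), multimodal tensors, and scalable indexing on top of this idea.

```python
import math

class TinyVectorStore:
    """Toy in-memory vector store: embeddings plus metadata, cosine-similarity search.
    Illustrative only -- not the Deep Lake API."""

    def __init__(self):
        self.embeddings = []
        self.metadata = []

    def add(self, embedding, metadata):
        # Each embedding is stored next to its metadata (text, JSON, etc.).
        self.embeddings.append(embedding)
        self.metadata.append(metadata)

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def search(self, query_embedding, k=1):
        # Rank stored items by cosine similarity to the query; return top-k metadata.
        scored = sorted(
            ((self._cosine(query_embedding, e), m)
             for e, m in zip(self.embeddings, self.metadata)),
            key=lambda t: t[0],
            reverse=True,
        )
        return [m for _, m in scored[:k]]

store = TinyVectorStore()
store.add([1.0, 0.0], {"text": "cats"})
store.add([0.0, 1.0], {"text": "dogs"})
print(store.search([0.9, 0.1], k=1))  # → [{'text': 'cats'}]
```

In a RAG app, the retrieved metadata (typically source text chunks) is injected into the LLM prompt; the LangChain and LlamaIndex integrations mentioned above handle that wiring against a real Deep Lake store.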
