Vector Search Options

Overview of Vector Search Options in Deep Lake

Deep Lake offers a variety of vector search options depending on where the Vector Store is stored and which infrastructure and code should run the computations.

| Search Method | Compute Location | Execution Algorithm | Query Syntax | Required Storage |
|---|---|---|---|---|
| Python | Client-side | Deep Lake OSS Python Code | LangChain API | In memory, local, user cloud, Tensor Database |
| Compute Engine | Client-side | Deep Lake C++ Compute Engine | LangChain API or TQL | User cloud (must be connected to Deep Lake), Tensor Database |
| Managed Database | Managed Tensor Database (server-side) | Deep Lake C++ Compute Engine | LangChain API or TQL | Tensor Database |

Overview of Search Computation Execution

Python (Client-Side)

Deep Lake OSS offers query execution logic that runs on the client (your machine) using open-source Python code. This compute logic is accessible in all Deep Lake Python APIs and is available for Vector Stores stored in any location. See the individual APIs below for details.
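A minimal sketch of a client-side (Python) search using the deeplake VectorStore API. The path, texts, and embedding function below are placeholders, and the 768-dimensional dummy vectors stand in for a real embedding model.

```python
from deeplake.core.vectorstore import VectorStore

# Placeholder embedding function -- swap in a real model (e.g. OpenAI or sentence-transformers).
def embedding_function(texts):
    if isinstance(texts, str):
        texts = [texts]
    return [[float(len(t))] * 768 for t in texts]  # dummy 768-dim vectors

# Create (or open) a Vector Store on local storage.
vector_store = VectorStore(path="./my_vector_store")

# Add documents; embeddings are computed on the client.
docs = ["Deep Lake is a database for AI.", "Vector search finds similar items."]
vector_store.add(
    text=docs,
    metadata=[{"source": "example"}] * len(docs),
    embedding_function=embedding_function,
    embedding_data=docs,
)

# Run the search in Python on the client machine.
results = vector_store.search(
    embedding_data="What is Deep Lake?",
    embedding_function=embedding_function,
    k=2,
)
print(results["text"])
```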

Compute Engine (Client-Side)

Deep Lake Compute Engine offers query execution logic that runs on the client (your machine) using C++ code that is called via the Python API. This compute logic is accessible in all Deep Lake Python APIs and is only available for Vector Stores stored in Deep Lake storage or in user clouds connected to Deep Lake. See the individual APIs below for details.

To run queries using Compute Engine, make sure to install the enterprise package: pip install "deeplake[enterprise]".
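A hedged sketch of routing a search through Compute Engine by passing exec_option="compute_engine". The dataset path, query embedding, and TQL query text are illustrative placeholders.

```python
from deeplake.core.vectorstore import VectorStore

# Vector Store in Deep Lake storage or a connected user cloud
# ("hub://org_id/dataset_name" is a placeholder path).
vector_store = VectorStore(path="hub://org_id/dataset_name", read_only=True)

# Vector search executed by the C++ Compute Engine on the client machine.
results = vector_store.search(
    embedding=[0.1] * 768,          # pre-computed query embedding (dimension is illustrative)
    k=4,
    exec_option="compute_engine",
)

# Compute Engine also accepts TQL queries (query text is illustrative).
tql_results = vector_store.search(
    query="select * order by cosine_similarity(embedding, ARRAY[0.1, 0.2]) desc limit 4",
    exec_option="compute_engine",
)
```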

Managed Tensor Database (Server-Side Running Compute Engine)

Deep Lake offers a Managed Tensor Database that executes queries on Deep Lake infrastructure while running Compute Engine under the hood. This compute logic is accessible in all Deep Lake Python APIs and is only available for Vector Stores stored in the Deep Lake Managed Tensor Database. See the individual APIs below for details.
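A minimal sketch of creating a Vector Store in the Managed Tensor Database and querying it server-side. The organization and dataset names are placeholders, as is the query embedding.

```python
from deeplake.core.vectorstore import VectorStore

# Create the Vector Store in the Managed Tensor Database
# (runtime={"tensor_db": True} requests managed storage; the path is a placeholder).
vector_store = VectorStore(
    path="hub://org_id/managed_dataset",
    runtime={"tensor_db": True},
)

# The query is executed server-side on Deep Lake infrastructure.
results = vector_store.search(
    embedding=[0.1] * 768,   # pre-computed query embedding (dimension is illustrative)
    k=4,
    exec_option="tensor_db",
)
```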

Vector search can be performed via a variety of APIs in Deep Lake. They are explained in the links below:

Deep Lake Vector Store API
REST API
LangChain API
