Vector Search Options
Overview of Vector Search Options in Deep Lake
Deep Lake offers a variety of vector search options depending on the storage location of the Vector Store and the infrastructure that runs the computations.
Storage Location                     Compute Location    Execution Algorithm
In memory or local                   Client-side         Deep Lake OSS Python Code
Deep Lake Storage                    Client-side         Deep Lake C++
Deep Lake Managed Tensor Database    Managed Database    Deep Lake C++
APIs for Search
Vector search can occur via a variety of APIs in Deep Lake. They are explained in the links below:
- Deep Lake Vector Store API
- Managed Database REST API
- LangChain API

Overview of Options for Search Computation Execution
The optimal option for search execution is automatically selected based on the Vector Store's storage location. The different computation options are explained below.
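Conceptually, the automatic selection maps a Vector Store's storage location to one of the three execution options. The sketch below is purely illustrative: the helper name and path prefixes are assumptions for this example, not Deep Lake's actual internal selection logic (though `python`, `compute_engine`, and `tensor_db` are the option names used in the Deep Lake Python API).

```python
# Illustrative only: a simplified picture of how an execution option might be
# chosen from a Vector Store's storage location. The function name and path
# prefixes are assumptions for this sketch, not Deep Lake's internal logic.

def pick_execution_option(path: str, tensor_db: bool = False) -> str:
    if tensor_db:
        # Managed Tensor Database: queries run server-side via Compute Engine.
        return "tensor_db"
    if path.startswith("hub://"):
        # Deep Lake storage: client-side Compute Engine (C++).
        return "compute_engine"
    # In-memory or local storage: client-side OSS Python execution.
    return "python"

print(pick_execution_option("mem://my_store"))                 # python
print(pick_execution_option("hub://org/my_store"))             # compute_engine
print(pick_execution_option("hub://org/db", tensor_db=True))   # tensor_db
```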
Python (Client-Side)
Deep Lake OSS offers query execution logic that runs on the client (your machine) using open-source Python code. This compute logic is accessible in all Deep Lake Python APIs and is available for Vector Stores stored in any location. See the individual APIs below for details.
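To make the client-side Python path concrete, the snippet below sketches what an exhaustive embedding search amounts to: computing cosine similarity between a query vector and every stored embedding, then returning the top-k indices. This is a standalone NumPy illustration of the general technique, not Deep Lake's actual OSS implementation.

```python
import numpy as np

def cosine_topk(query: np.ndarray, embeddings: np.ndarray, k: int = 4) -> np.ndarray:
    """Return indices of the k stored embeddings most similar to `query`."""
    # Normalize so the dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = e @ q
    # argsort is ascending; take the last k and reverse for descending similarity.
    return np.argsort(scores)[-k:][::-1]

rng = np.random.default_rng(0)
store = rng.normal(size=(1000, 64))              # 1000 stored 64-d embeddings
query = store[42] + 0.01 * rng.normal(size=64)   # near-duplicate of row 42
print(cosine_topk(query, store, k=3)[0])         # index 42 ranks first
```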
Compute Engine (Client-Side)
Deep Lake Compute Engine offers query execution logic that runs on the client (your machine) using C++ code called via the Python API. This compute logic is accessible in all Deep Lake Python APIs, but it is only available for Vector Stores stored in Deep Lake storage or in user clouds connected to Deep Lake. See the individual APIs below for details.
To run queries using Compute Engine, make sure to install the enterprise package: pip install "deeplake[enterprise]".
Managed Tensor Database (Server-Side Running Compute Engine)
Deep Lake offers a Managed Tensor Database that executes queries on Deep Lake infrastructure while running Compute Engine under the hood. This compute logic is accessible in all Deep Lake Python APIs, but it is only available for Vector Stores stored in the Deep Lake Managed Tensor Database. See the individual APIs below for details.