Tensor Query Language (TQL)
Deep Lake offers a highly performant SQL-style query engine for filtering your data.
How to query datasets using the Deep Lake Tensor Query Language (TQL)
Querying datasets is a critical aspect of data science workflows that enables users to filter datasets and focus their work on the most relevant data. Deep Lake offers a highly performant query engine built in C++ and optimized for the Deep Lake data format.
Querying features in the Python API are installed using pip install "deeplake[enterprise]". Details on all installation options are available here.
Dataset Query Summary
Querying in the UI
Querying in the Vector Store Python API
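For reference, a TQL query string can be passed to the Vector Store's search method. The snippet below is a minimal sketch, not the canonical example: the Vector Store path, the metadata filter, and the tensor names are placeholders that depend on how your Vector Store was created.

```python
from deeplake.core.vectorstore import VectorStore

# Placeholder path to an existing Vector Store
vector_store = VectorStore(path="hub://org_name/vector_store_name")

# Pass a TQL string via the query parameter. The filter below assumes a
# metadata tensor containing a "source" key and is purely illustrative.
# TQL queries run on the Compute Engine rather than the pure-Python backend.
results = vector_store.search(
    query="select * where metadata['source'] == 'wikipedia' limit 10",
    exec_option="compute_engine",
)
```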
Querying in the low-level Python API
Queries can also be performed in the Python API using:
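A minimal sketch of a low-level query, assuming a dataset with a labels tensor that contains a 'cat' class (the dataset path and tensor name are placeholders):

```python
import deeplake

ds = deeplake.load("hub://org_name/dataset_name")  # placeholder dataset path

# ds.query runs the TQL string and returns a Dataset View of the matching samples
view = ds.query("select * where contains(labels, 'cat')")

print(len(view))  # number of samples satisfying the query
```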
Saving and utilizing dataset query results in the low-level Python API
The query results (Dataset Views) can be saved in the UI as shown above, or if the view is generated in Python, it can be saved using the Python API below. Full details are available here.
In order to maintain data lineage, Dataset Views are immutable and are connected to specific commits. Therefore, views can only be saved if the dataset has a commit and there are no uncommitted changes in the HEAD. You can check for this using ds.has_head_changes.
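A sketch of that workflow, reusing the dataset and query from the example above; the commit and view messages are placeholders:

```python
# Commit any pending changes so the view can be attached to a specific commit
if ds.has_head_changes:
    ds.commit("Commit before saving the query result")

view = ds.query("select * where contains(labels, 'cat')")

# Persist the view; it is stored with the dataset and tied to the current commit
view.save_view(message="Samples labeled as cat")
```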
Dataset Views can be loaded in the Python API and they can be passed to ML frameworks just like regular datasets:
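For example, a saved view can be re-loaded by its id and streamed to PyTorch; the id string and the dataloader settings below are placeholders:

```python
# Load a previously saved view; optimize=True materializes it for fast streaming
view = ds.load_view("saved_view_id", optimize=True, num_workers=2)

# The view behaves like a regular dataset, so it can feed a PyTorch dataloader
dataloader = view.pytorch(batch_size=32, shuffle=True)

for batch in dataloader:
    pass  # training loop goes here
```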
The optimize parameter in ds.load_view(..., optimize = True) materializes the Dataset View into a new sub-dataset that is optimized for streaming. If the original dataset uses linked tensors, the data will be copied to Deep Lake format.
Optimizing the Dataset View is critical for achieving rapid streaming.
If the saved Dataset View is no longer needed, it can be deleted using:
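For example (the view id is a placeholder):

```python
# Delete a saved view by its id; this does not affect the underlying dataset
ds.delete_view("saved_view_id")
```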
Query Syntax
TQL Syntax