This tutorial assumes the reader has knowledge of Deep Lake APIs and does not explain them in detail. For more information, check out the Deep Lake documentation.
Zookeeper is a tool that can be used to manage Deep Lake locks and ensure that only one worker is writing to a Deep Lake dataset at a time. It offers a simple API for managing locks with a few lines of code.
First, let's install Zookeeper and launch a local server using Docker in the CLI.
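A minimal sketch of this step, assuming Docker is installed and using the official zookeeper image from Docker Hub (2181 is Zookeeper's default client port):

```bash
docker run --name zookeeper -p 2181:2181 -d zookeeper
```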
All write operations should be executed while respecting the lock.
Let's connect a Python client to the local server and create a WriteLock using:
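A minimal sketch, assuming the kazoo client library is installed (pip install kazoo), the Zookeeper server above is reachable at 127.0.0.1:2181, and /deeplake_lock is an arbitrary lock path chosen for this example:

```python
from kazoo.client import KazooClient

# Connect to the local Zookeeper server launched above
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Create an exclusive write lock on an arbitrary Zookeeper path
write_lock = zk.WriteLock("/deeplake_lock")
```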
The client can be blocked from performing operations until it holds the WriteLock using the code below. The code waits until the lock becomes available, and Deep Lake's internal lock should be disabled by specifying lock_enabled=False:
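A minimal sketch of a locked write, assuming a local dataset at ./my_dataset with a hypothetical text tensor; lock_enabled=False tells Deep Lake not to create its own lock, since Zookeeper now coordinates the writers:

```python
import deeplake

# Block until the Zookeeper write lock is acquired, then perform the writes
with write_lock:
    # Disable Deep Lake's internal lock; Zookeeper handles locking instead
    ds = deeplake.load("./my_dataset", lock_enabled=False)
    ds.append({"text": "a new row of data"})  # hypothetical tensor name
# The lock is released automatically when the "with" block exits
```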
If the write operations are only appending data, it is not necessary to use locks during read operations such as vector search. However, the Deep Lake datasets must be reloaded or re-initialized in order to pick up the latest data from the write operations.
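For example, a reader could simply re-load the dataset (a sketch assuming the same ./my_dataset path as above):

```python
import deeplake

# Re-load the dataset in read-only mode to pick up rows appended by other workers
ds_read = deeplake.load("./my_dataset", read_only=True)
```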
If the write operations are updating or deleting rows of data, the read operations should also lock the dataset in order to avoid reading corrupted data.
Let's connect a Python client to the same local server above and create a ReadLock. Multiple clients can have a ReadLock without blocking each other, but they will all be blocked by the WriteLock above.
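A minimal sketch of the reader side, assuming it runs in a separate process from the writer; note that the ReadLock must use the same lock path (/deeplake_lock in this example) as the WriteLock in order for the two to coordinate:

```python
from kazoo.client import KazooClient

# A separate client (e.g. in another worker process) connecting to the same server
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Shared read lock on the same path as the write lock above
read_lock = zk.ReadLock("/deeplake_lock")
```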
The syntax for restricting operations using the ReadLock is:
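A sketch of a locked read, again assuming the hypothetical ./my_dataset path; multiple readers can hold the ReadLock concurrently, but all of them wait while a writer holds the WriteLock:

```python
import deeplake

# Block until no writer holds the lock, then perform the reads
with read_lock:
    ds = deeplake.load("./my_dataset", read_only=True)
    # ... run vector search or other read operations on ds ...
```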
Congrats! You just learned how to manage your own lock for Deep Lake using Zookeeper! 🎉