Storage Synchronization
Synchronizing data with long-term storage and achieving optimal performance using Deep Lake.
Using the `with` context when updating Deep Lake datasets is critical for achieving rapid write performance. By default, any standalone update to a Deep Lake dataset is immediately pushed to the dataset's long-term storage location. Due to the sheer number of discrete write operations, this can significantly increase runtime, especially when the data is stored in the cloud. In the example below, an update is pushed to storage for every call to the `.append()` command.

```python
for i in range(10):
    ds.my_tensor.append(i)
```
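For context, a minimal end-to-end sketch of this slow pattern is shown below. It assumes Deep Lake's `deeplake.empty` and `create_tensor` APIs; the in-memory path `mem://demo` and the tensor name are placeholders so the snippet can run without cloud credentials.

```python
import deeplake

# Create an empty dataset. With a cloud path (e.g. "s3://bucket/dataset"),
# each standalone update below would trigger a separate network write.
ds = deeplake.empty("mem://demo", overwrite=True)
ds.create_tensor("my_tensor")

# Standalone updates: every append is pushed to storage immediately.
for i in range(10):
    ds.my_tensor.append(i)
```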
To reduce runtime when using Deep Lake, the `with` syntax below significantly improves performance because it only pushes updates to long-term storage after the code block inside the `with` statement has finished executing, or when the local cache is full. This greatly reduces the number of discrete write operations, thereby increasing write speed by up to 100X.

```python
with ds:
    for i in range(10):
        ds.my_tensor.append(i)
```
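To observe the difference on your own storage, a rough timing comparison can be sketched as below. This is an illustrative sketch, not a benchmark: the dataset paths are hypothetical placeholders, and the actual speedup depends on storage latency, with cloud paths showing the largest gap because every standalone write is a network round trip.

```python
import time

import deeplake


def timed_appends(ds, use_context: bool, n: int = 100) -> float:
    """Time n appends, with or without the `with` context."""
    start = time.perf_counter()
    if use_context:
        # Updates are cached locally and flushed when the block exits
        # (or when the local cache fills up).
        with ds:
            for i in range(n):
                ds.my_tensor.append(i)
    else:
        # Standalone updates: each append is pushed to storage immediately.
        for i in range(n):
            ds.my_tensor.append(i)
    return time.perf_counter() - start


# Placeholder in-memory datasets; substitute your own storage path
# (e.g. "s3://bucket/dataset") to measure the effect of cloud latency.
slow = deeplake.empty("mem://slow", overwrite=True)
slow.create_tensor("my_tensor")
fast = deeplake.empty("mem://fast", overwrite=True)
fast.create_tensor("my_tensor")

print(f"standalone appends: {timed_appends(slow, use_context=False):.3f}s")
print(f"inside with block:  {timed_appends(fast, use_context=True):.3f}s")
```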