Working with Videos
How to manage video datasets and train models using Deep Lake.
Performing deep learning on video data can be challenging due to the large size of video files, especially when they are uncompressed to raw numeric data that is fed into neural networks. Deep Lake abstracts these challenges away from the user so you can focus on building performant models.
Setup
Make sure to install Deep Lake with pip install "deeplake[av]" in order to use Deep Lake's audio and video features.
Creating a video tensor
To create a video tensor, we specify an htype of "video" and set sample_compression to the format of the video.
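A minimal sketch (the dataset path and tensor name are illustrative):

```python
import deeplake

# Create a local dataset (path is illustrative)
ds = deeplake.empty("./video_dataset")

# Create a video tensor; sample_compression should match the source format (e.g. "mp4")
ds.create_tensor("videos", htype="video", sample_compression="mp4")
```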
Adding video samples
We append videos to the newly created tensor by reading the video files with deeplake.read.
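For example (the local file path is illustrative):

```python
# Read a local video file and append it to the tensor without decompressing it
ds.videos.append(deeplake.read("./videos/example.mp4"))
```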
deeplake.read can also read videos from http://, gcs://, and s3:// URLs, provided you have the credentials to access them. Examples include:
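A sketch of reading from remote storage; the bucket path, URL, and credential values below are placeholders:

```python
# Append a video stored in S3, passing explicit credentials through the creds argument
ds.videos.append(
    deeplake.read(
        "s3://my-bucket/path/to/video.mp4",
        creds={"aws_access_key_id": "...", "aws_secret_access_key": "..."},
    )
)

# Append a video from a public http URL
ds.videos.append(deeplake.read("http://example.com/video.mp4"))
```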
See deeplake.read and check out this notebook to see this in action.
Adding annotations
See a complete example for this section in this notebook.
Annotations like bounding boxes can be added and visualized in Deep Lake along with the video samples. We use tensors of htype sequence[bbox] for this purpose. Every sample in a sequence[bbox] tensor will be a sequence of bounding boxes which represents the annotations for the corresponding video sample in the video tensor.
Learn more about sequences here.
See this page for more details about the bbox htype.
Next, consider an example of an annotations file taken from the LaSOT dataset. It contains annotations for every frame of a video.
We convert this to a numpy array and append it to our boxes tensor.
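A minimal sketch of this flow, assuming one bounding box per frame and a comma-separated x, y, w, h annotation file as in LaSOT (the file and tensor names are illustrative):

```python
import numpy as np

# Create the annotation tensor alongside the video tensor
ds.create_tensor("boxes", htype="sequence[bbox]")

# Load one box per frame from the annotation file: shape (num_frames, 4)
annotations = np.loadtxt("groundtruth.txt", delimiter=",", dtype="float32")

# Reshape to (num_frames, num_boxes_per_frame, 4) so each frame holds a list of boxes
annotations = annotations.reshape(len(annotations), 1, 4)

ds.boxes.append(annotations)
```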
Visualize the bounding boxes within your notebook using ds.visualize().
The shapes of the samples in the video and sequence[bbox] tensors have to match in order for visualization to work properly. If the shape of the video tensor is (# frames, height, width, 3), the shape of the sequence tensor should be (# frames, # of boxes in a frame, 4).
Accessing video metadata
Shape
We can get the shape of a video sample in (N, H, W, C) format as shown below.
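A sketch, assuming the video tensor is named videos:

```python
# Shape of the first video sample: (num_frames, height, width, channels)
print(ds.videos[0].shape)
```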
Sample info
Info about a video sample can be accessed as shown below.
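A sketch, assuming the sample_info property available on recent Deep Lake tensors and a tensor named videos:

```python
# Metadata about the first video sample (duration, fps, timebase, etc.)
info = ds.videos[0].sample_info
print(info)
```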
This returns info about the first sample as a dict.
duration is in units of timebase.
Accessing video frames
The most important part of working with videos on Deep Lake is retrieving the frames of a video sample as a numpy array.
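For example, assuming the video tensor is named videos:

```python
# Decompress the entire first video sample into memory
frames = ds.videos[0].numpy()
```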
This decompresses the entire first video sample and returns the frames as a numpy array.
Be careful when decompressing an entire large video sample because it can blow up your memory.
Deep Lake allows you to index the video tensor like a numpy array and return the frames you want. Only the required frames are decompressed. See a few examples below:
Getting 100 frames from index 100 to 200
Indexing with a step
Getting a single frame
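A sketch of these indexing patterns, assuming the tensor is named videos:

```python
# 100 frames from index 100 to 200
frames = ds.videos[0][100:200].numpy()

# Indexing with a step: every 5th frame between index 100 and 200
frames = ds.videos[0][100:200:5].numpy()

# A single frame
frame = ds.videos[0][150].numpy()
```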
Accessing video timestamps
Presentation timestamps (PTS) of frames can be obtained (in seconds) through a video tensor's .timestamp attribute after indexing it, just like in the previous section:
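For example (the tensor is assumed to be named videos):

```python
# Timestamps (in seconds) of frames 100-104 of the first video sample
print(ds.videos[0][100:105].timestamp)
```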
.data()
Calling ds.videos[index].data() will return a dict with keys 'frames' and 'timestamps', with the corresponding numpy arrays as values. Indexing works the same way as it does with .numpy().
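A sketch, assuming a tensor named videos:

```python
data = ds.videos[0][100:200].data()
frames = data["frames"]          # decoded frames as a numpy array
timestamps = data["timestamps"]  # presentation timestamps of those frames
```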
Visualizing videos
.play()
Individual video samples can be instantly visualized by calling .play() on them:
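For example (assuming the tensor is named videos):

```python
# Stream and play the first video sample
ds.videos[0].play()
```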
This will play the video in your web browser; in a Jupyter notebook, it will play inline.
This feature is not yet supported on Colab.
ds.visualize()
The whole Deep Lake dataset can be visualized by calling .visualize() on your dataset in a Jupyter or Colab notebook.
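A one-line sketch, assuming ds is a loaded Deep Lake dataset:

```python
# Render an interactive visualization of the dataset inside the notebook
ds.visualize()
```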
Try this out for yourself here!
On Colab, we only support visualizing hub:// datasets.
Linked videos
Tensors of Deep Lake type link[video] can be used to store links to videos. All of the above features are supported for linked videos. https://, gcs://, s3://, and gdrive:// links are accepted.
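A minimal sketch of creating and populating a linked video tensor (the tensor name and URL are illustrative):

```python
# Create a tensor that stores links to videos instead of the video data itself
ds.create_tensor("video_links", htype="link[video]")

# Append a link to a publicly accessible video
ds.video_links.append(deeplake.link("https://example.com/video.mp4"))
```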
You will need to set credentials to link to private data on your S3 or GCS.
For Activeloop cloud datasets
This process is easy and streamlined for deeplake:// datasets.
First, go to the Activeloop platform, log in, and choose 'Managed credentials' in settings.
Then choose 'Add Credentials'.
Select a credentials provider, set the credentials name (say, 'MY_KEY'), fill in the fields, and save.
Done! Your credentials have now been set.
Add managed credentials to your dataset
Use ds.add_creds_key with managed set to True to add the credentials to your dataset. Multiple credentials can be added.
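A sketch, using the 'MY_KEY' credentials name from above:

```python
# Register the managed credentials with this dataset
ds.add_creds_key("MY_KEY", managed=True)
```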
Use credentials
And when adding linked data using deeplake.link, simply specify which credentials to use through the creds_key argument.
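A sketch (the bucket path and tensor name are illustrative):

```python
ds.video_links.append(
    deeplake.link("s3://my-private-bucket/video.mp4", creds_key="MY_KEY")
)
```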
For non-Activeloop cloud datasets
For non-hub:// datasets, you can use credentials set in your environment by passing creds_key="ENV".
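For example (the bucket path and tensor name are illustrative):

```python
ds.video_links.append(
    deeplake.link("s3://my-private-bucket/video.mp4", creds_key="ENV")
)
```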
Or you can temporarily add creds to your dataset and then reference them when adding the linked data, as sketched below.
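A sketch of this two-step flow; the populate_creds call, key name, and credential values shown are assumptions based on Deep Lake's credential-management API:

```python
# Add a (non-managed) credentials key and populate it with temporary credentials
ds.add_creds_key("TEMP_KEY")
ds.populate_creds(
    "TEMP_KEY",
    {"aws_access_key_id": "...", "aws_secret_access_key": "..."},
)

# Then reference the key when linking data
ds.video_links.append(
    deeplake.link("s3://my-private-bucket/video.mp4", creds_key="TEMP_KEY")
)
```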
See deeplake.link for more details.
Video streaming
This section describes some implementation details regarding how video data is fetched and decompressed in Deep Lake.
Large video samples (> 16MB by default) stored in remote Deep Lake datasets are not downloaded in their entirety on calling .numpy()
. Instead, they are streamed from storage. Only the required packets are decompressed and converted to numpy arrays based on how the tensor is indexed.
.play() also streams videos from storage.