Working with Videos
How to manage video datasets and train models using Hub.

Performing deep learning on video data can be challenging due to the large size of video files, especially once they are decompressed into the raw numeric arrays that are fed into neural networks. Hub abstracts these challenges away so you can focus on building performant models.

Setup

Make sure to install hub with pip install "hub[av]" in order to use Hub's audio and video features.
import hub
ds = hub.empty("demo/video") # create a local dataset

Creating a video tensor

To create a video tensor, we specify an htype of "video" and set sample_compression to the format of the video.
ds.create_tensor("videos", htype="video", sample_compression="mp4")

Adding video samples

We append videos to the newly created tensor by reading the video files with hub.read:
ds.videos.append(hub.read("./videos/example1.mp4"))
ds.videos.append(hub.read("./videos/example2.mp4"))
hub.read can also read videos from http://, gcs://, and s3:// URLs, provided you have the credentials to access them. For example:
ds.videos.append(
    hub.read(
        "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4",
        creds=None,
    )
)
ds.videos.append(
    hub.read(
        "s3://bucket-name/sample_video.mp4",
        creds={
            "aws_access_key_id": "...",
            "aws_secret_access_key": "...",
            "aws_session_token": "...",
        },
    )
)
See hub.read and check out this notebook to see this in action.

Accessing video metadata

Shape

We can get the shape of a video sample in (N, H, W, C) format using:
ds.videos[0].shape
(400, 360, 640, 3)

Sample info

Info about a video sample can be accessed using:
ds.videos[0].sample_info
This returns info about the first sample as a dict:
{
    'duration': 400400,
    'fps': 29.97002997002997,
    'timebase': 3.3333333333333335e-05,
    'shape': [400, 360, 640, 3],
    'format': 'mp4',
    'filename': './videos/example1.mp4',
    'modified': False
}
duration is in units of timebase.
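To convert the duration to seconds, multiply the two fields together (a quick sketch using the sample_info values shown above):
info = ds.videos[0].sample_info
# 400400 * 3.3333e-05 ≈ 13.35 seconds, i.e. 400 frames at ~29.97 fps
duration_seconds = info['duration'] * info['timebase']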

Accessing video frames

The most important part of working with videos on Hub is retrieving the frames of a video sample as a numpy array.
video = ds.videos[0].numpy()
This decompresses the entire first video sample and returns the frames as a numpy array.
print(type(video))
print(video.shape)
<class 'numpy.ndarray'>
(400, 360, 640, 3)
Be careful when decompressing an entire large video sample because it can blow up your memory.
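As a rough estimate (a sketch; assumes frames decode to uint8, one byte per channel), the in-memory size is the product of the shape dimensions:
import numpy as np

shape = ds.videos[0].shape    # (400, 360, 640, 3)
size_mb = np.prod(shape) / 1e6    # ~276 MB of raw frames for this one sample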
Hub allows you to index the video tensor like a numpy array and return the frames you want. Only the required frames are decompressed. See a few examples below:
Getting 100 frames, from index 100 to 200
# 1st sample, frames 100 - 200
video = ds.videos[1, 100:200].numpy()
video.shape
(100, 360, 640, 3)
Indexing with a step
# 0th sample, frames 100 - 200 with step of 5 frames
video = ds.videos[0, 100:200:5].numpy()
video.shape
(20, 360, 640, 3)
Getting a single frame
# 1st sample, last frame
last_frame = ds.videos[1, -1].numpy()
last_frame.shape
(360, 640, 3)

Accessing video timestamps

Presentation timestamps (PTS) of frames can be obtained (in seconds) through a video tensor's .timestamp attribute after indexing it just like in the previous section:
# timestamps of frames 10 - 15 of 0th sample
ds.videos[0, 10:15].timestamp
array([0.36703333, 0.4004 , 0.43376666, 0.46713334, 0.5005 ],
dtype=float32)

.data()

Calling ds.videos[index].data() will return a dict with keys 'frames' and 'timestamps' with the corresponding numpy arrays as values. Indexing works the same way as it does with .numpy().
data = ds.videos[1, 15:20].data()
data['frames'].shape
(5, 360, 640, 3)
data['timestamps']
array([0.5005    , 0.5672333 , 0.6006    , 0.6339667 , 0.76743335],
      dtype=float32)

Visualizing videos

.play()

Individual video samples can be instantly visualized by calling .play() on them:
ds.videos[1].play()
This will play the video in your web browser:
[Image: video playback in the browser]
In a Jupyter notebook, it will look like this:
[Image: video playback in a Jupyter notebook]
This feature is not yet supported on Colab.

ds.visualize()

The whole Hub dataset can be visualized by calling .visualize() on it in a Jupyter or Colab notebook.
ds.visualize()
[Image: ds.visualize() on Colab]
Try this out for yourself here!
On Colab, we only support visualizing hub:// datasets.

Linked videos

Tensors of hub type link[video] can be used to store links to videos. All of the above features are supported for linked videos. https://, gcs://, s3://, and gdrive:// links are accepted.
# create linked tensor
links = ds.create_tensor("video_links", htype="link[video]")
# append linked samples
links.append(hub.link("http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4", creds_key=None)) # link to public video
# .numpy()
video = links[0].numpy()
# shape of numpy array
video.shape
(360, 720, 1280, 3)
You will need to set credentials in order to link to private data in your S3 or GCS storage.

For hub cloud datasets

This process is easy and streamlined for hub:// datasets.
  • First, go to your Activeloop platform, log in, and choose 'Managed credentials' in settings.
  • Then choose 'Add Credentials'.
  • Select a credentials provider, set the credentials name (say, 'MY_KEY'), fill in the fields, and save.
  • Done! Your credentials have now been set.

Add managed credentials to your dataset

Use ds.add_creds_key with managed set to True to add the credentials to your dataset. Multiple credentials can be added.
ds.add_creds_key("MY_KEY", managed=True)
ds.add_creds_key("S3_KEY", managed=True)

Use credentials

When adding linked data using hub.link, simply specify which credentials to use through the creds_key argument.
ds.links.append(hub.link("s3://my-bucket/sample-video.mp4", creds_key="MY_KEY"))

For non-hub cloud datasets

For non-hub:// datasets, you can use credentials set in your environment by specifying creds_key="ENV":
ds.links.append(hub.link("s3://my-bucket/sample-video.mp4", creds_key="ENV"))
Or you can temporarily add creds to your dataset:
creds = {
    "aws_access_key_id": "...",
    "aws_secret_access_key": "...",
    "aws_session_token": "...",
}
# add creds key (Note that managed is False)
ds.add_creds_key("TEMP_KEY")
# populate creds with a credentials dict
ds.populate_creds("TEMP_KEY", creds)
and then:
ds.links.append(hub.link("s3://my-bucket/sample-video.mp4", creds_key="TEMP_KEY"))
See hub.link

Video streaming

This section describes some implementation details regarding how video data is fetched and decompressed in Hub.
Large video samples (> 16MB by default) stored in remote hub datasets are not downloaded in their entirety on calling .numpy(). Instead, they are streamed from storage. Only the required packets are decompressed and converted to numpy arrays based on how the tensor is indexed.
.play() also streams videos from storage.
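For example (a sketch; the dataset path below is hypothetical), indexing a remote sample before calling .numpy() streams and decodes only the requested frames:
import hub

# hypothetical hub:// dataset with a "videos" tensor
ds = hub.load("hub://org/large-video-dataset")

# only the packets covering frames 1000 - 1100 are fetched and decompressed
clip = ds.videos[0, 1000:1100].numpy()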