Deep Learning Quickstart

A jump-start guide to using Deep Lake for Deep Learning.

How to Get Started with Deep Learning in Deep Lake in Under 5 Minutes

Installing Deep Lake

Deep Lake can be installed using pip. By default, Deep Lake does not install dependencies for video, google-cloud, compute engine, and other features. Details on all installation options are available here.

!pip install deeplake
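
For example, optional dependencies can be installed as pip extras. The extras names below are assumptions based on common Deep Lake installation options, so consult the installation details linked above for the exact list:

!pip install "deeplake[av]"   # Audio and video support (assumed extras name)
!pip install "deeplake[gcp]"  # Google Cloud Storage support (assumed extras name)
!pip install "deeplake[all]"  # All optional dependencies (assumed extras name)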

Fetching Your First Deep Lake Dataset

Let's load the VisDrone dataset, a rich dataset with many object detections per image. Datasets hosted by Activeloop are identified by the host organization id followed by the dataset name: activeloop/visdrone-det-train.

import deeplake

dataset_path = 'hub://activeloop/visdrone-det-train'
ds = deeplake.load(dataset_path) # Returns a Deep Lake Dataset but does not download data locally

Reading Samples From a Deep Lake Dataset

Data is not immediately read into memory because Deep Lake operates lazily. You can fetch data by calling the .numpy() or .data() methods:

# Indexing
image = ds.images[0].numpy() # Fetch the first image and return a numpy array
labels = ds.labels[0].data() # Fetch the labels in the first image

# Slicing
img_list = ds.images[0:100].numpy(aslist=True) # Fetch the first 100 images and return
                                               # them as a list of numpy arrays

Other metadata, such as the mapping between numerical labels and their text counterparts, can be accessed using:

labels_list = ds.labels.info['class_names']
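
As a minimal sketch of how this mapping is used, the snippet below converts the numeric labels of the first sample into their text class names (it reuses the ds and labels_list variables defined above):

label_ids = ds.labels[0].numpy().flatten()              # Numeric class ids for the first sample
label_names = [labels_list[int(i)] for i in label_ids]  # Map each id to its text class name
print(label_names)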

Visualizing a Deep Lake Dataset

Deep Lake enables users to visualize and interpret large datasets. The tensor layout for a dataset can be inspected using:

ds.summary()

The dataset can be visualized in the Deep Lake UI, or using an iframe in a Jupyter notebook:

ds.visualize()

Visualizing datasets in the Deep Lake UI unlocks more features and faster performance compared to visualization in Jupyter notebooks.

Creating Your Own Deep Lake Datasets

You can access all of the features above and more with your own datasets! If your source data conforms to one of the formats below, you can ingest it directly with a single line of code. The ingestion functions support source data from the cloud, as well as creation of Deep Lake datasets in the cloud.

For example, a COCO format dataset can be ingested using:

dataset_path = 's3://bucket_name_deeplake/dataset_name' # Destination for the Deep Lake dataset

images_folder = 's3://bucket_name_source/images_folder'
annotations_files = ['s3://bucket_name_source/annotations.json'] # Can be a list of COCO jsons.

ds = deeplake.ingest_coco(images_folder, annotations_files, dataset_path, src_creds={...}, dest_creds={...})

To create datasets from source data that does not conform to one of the formats above, you can use our methods for manually creating datasets and tensors and populating them with data.
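
As an illustrative sketch of that manual workflow, the example below creates an empty local dataset, defines its tensors, and appends one sample. The local paths, tensor names, htypes, and class names are assumptions for demonstration only:

import deeplake

ds = deeplake.empty('./my_dataset')  # Create an empty Deep Lake dataset locally

with ds:
    # Define the tensor layout before adding data (names and htypes are illustrative)
    ds.create_tensor('images', htype='image', sample_compression='jpeg')
    ds.create_tensor('labels', htype='class_label', class_names=['cat', 'dog'])

    # Populate the dataset; deeplake.read lazily loads the image file
    ds.append({'images': deeplake.read('./cat_0.jpg'), 'labels': 0})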

Authentication

To use Deep Lake features that require authentication (Activeloop storage, Tensor Database storage, connecting your cloud dataset to the Deep Lake UI, etc.), you should register in the Deep Lake App and authenticate on the client using one of the methods below:

Environment Variable

Set the environment variable ACTIVELOOP_TOKEN to your API token. In Python, this can be done using:

import os

os.environ['ACTIVELOOP_TOKEN'] = <your_token>

Pass the Token to Individual Methods

You can pass your API token to individual methods that require authentication such as:

ds = deeplake.load('hub://org_name/dataset_name', token=<your_token>)

Next Steps

Check out our Getting Started Guide for a comprehensive walk-through of Deep Lake. Also check out tutorials on Running Queries, Training Models, and Creating Datasets, as well as Playbooks about powerful use cases enabled by Deep Lake.