Storage Options
How to authenticate using Activeloop storage, AWS S3, and Google Cloud Storage.
Deep Lake datasets can be stored locally or on several cloud storage providers, including Activeloop Storage, AWS S3, and Google Cloud Storage. Datasets are accessed by choosing the correct prefix for the dataset path that is passed to methods such as `deeplake.load(path)`, `deeplake.dataset(path)`, and `deeplake.empty(path)`. The path prefixes are:
| Storage Location | Path |
| --- | --- |
| Activeloop | `hub://org_name/dataset_name` |
| AWS S3 | `s3://bucket_name/dataset_name` |
| Google Cloud | `gcs://bucket_name/dataset_name` |
If you choose to manage your credentials in Activeloop Platform, you can access datasets in your own cloud buckets using the Deep Lake path `hub://org_name/dataset_name`, without having to pass credentials in the Python API.
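For example, a dataset whose credentials are managed in Activeloop Platform can be loaded with only its `hub://` path (the organization and dataset names below are placeholders):

```python
import deeplake

# The hub:// prefix routes access through Activeloop, which supplies the
# stored credentials for the underlying bucket, so none are passed here.
ds = deeplake.load('hub://org_name/dataset_name')
```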
Authentication for each cloud storage provider:
Activeloop Storage and Managed Datasets
In order to gain access in Python to datasets stored in Activeloop, or datasets in other clouds that are managed by Activeloop, users must register on Activeloop Platform or through the CLI, and log in through the CLI using:
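A minimal sketch of the login step, shown in notebook form (drop the leading `!` in a terminal); the exact flags may vary between CLI versions:

```
# Log in with the username and password created during registration.
!activeloop login -u my_username -p my_password
```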
Authentication using tokens
Authentication can also be performed using tokens, which can be created after registration on Activeloop Platform (Profile -> API tokens). Tokens can be passed to any Deep Lake function that requires authentication:
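For example (a sketch; the organization, dataset name, and token string are placeholders):

```python
import deeplake

# An API token created in Activeloop Platform (Profile -> API tokens)
# is passed via the token parameter.
ds = deeplake.load('hub://org_name/dataset_name', token='YOUR_API_TOKEN')
```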
Credentials created using the CLI login `!activeloop login` expire after 1000 hrs. Credentials created using API tokens in Activeloop Platform expire after the time specified for the individual token. Therefore, long-term workflows should be run using API tokens in order to avoid expiration of credentials mid-workflow.
AWS S3
Authentication with AWS S3 has 4 options:
1. Use Deep Lake on a machine in the AWS ecosystem that has access to the relevant S3 bucket via AWS IAM, in which case there is no need to pass credentials in order to access datasets in that bucket.
2. Configure AWS through the CLI using `aws configure`. This creates a credentials file on your machine that is automatically accessed by Deep Lake during authentication.
3. Save the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` (optional) in environment variables of the same name, which are loaded as default credentials if no other credentials are specified.
4. Create a dictionary with the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` (optional), and pass it to Deep Lake via the `creds` parameter, as shown below. Note: the dictionary keys must be lowercase!
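A sketch of option 4, assuming a dataset stored at the placeholder path `s3://bucket_name/dataset_name`:

```python
import deeplake

# Note the lowercase dictionary keys; aws_session_token is only needed
# for temporary (STS) credentials.
creds = {
    'aws_access_key_id': 'YOUR_ACCESS_KEY_ID',
    'aws_secret_access_key': 'YOUR_SECRET_ACCESS_KEY',
    'aws_session_token': 'YOUR_SESSION_TOKEN',  # optional
}

ds = deeplake.load('s3://bucket_name/dataset_name', creds=creds)
```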
`endpoint_url` can be used for connecting to other object storages supporting an S3-like API, such as MinIO, StorageGrid, and others.
Custom Storage with S3 API
In order to connect to other object storages supporting an S3-like API, such as MinIO, StorageGrid, and others, simply add `endpoint_url` to the `creds` dictionary.
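For example, assuming a hypothetical MinIO deployment at `https://minio.example.com`:

```python
import deeplake

# endpoint_url points Deep Lake's S3 API calls at the custom object store.
creds = {
    'aws_access_key_id': 'YOUR_ACCESS_KEY_ID',
    'aws_secret_access_key': 'YOUR_SECRET_ACCESS_KEY',
    'endpoint_url': 'https://minio.example.com',
}

ds = deeplake.load('s3://bucket_name/dataset_name', creds=creds)
```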
Google Cloud Storage
Authentication with Google Cloud Storage has 2 options:
1. Create a service account, download the JSON file containing the keys, and then pass that file to the `creds` parameter in `deeplake.load('gcs://.....', creds = 'path_to_keys.json')`. It is also possible to manually pass the information from the JSON file into the `creds` parameter using `deeplake.load('gcs://.....', creds = {information from the JSON file})` (see the sketch after this list).
2. Authenticate through the browser using `deeplake.load('gcs://.....', creds = 'browser')`. This requires that the project credentials are stored on your machine, which happens after `gcloud` is initialized and logged in through the CLI. After this step, re-authentication through the browser can be skipped using `deeplake.load('gcs://.....', creds = 'cache')`.
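A sketch of option 1, using placeholder bucket, project, and account names; the dictionary fields mirror the standard service-account JSON key file:

```python
import deeplake

# Pass the downloaded service-account key file directly...
ds = deeplake.load('gcs://bucket_name/dataset_name', creds='path_to_keys.json')

# ...or paste its contents as a dictionary (all values are placeholders).
creds = {
    'type': 'service_account',
    'project_id': 'your-project-id',
    'private_key_id': '...',
    'private_key': '-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n',
    'client_email': 'service-account-name@your-project-id.iam.gserviceaccount.com',
    'client_id': '...',
    'token_uri': 'https://oauth2.googleapis.com/token',
}
ds = deeplake.load('gcs://bucket_name/dataset_name', creds=creds)
```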