The primary objective for Deep Lake is to enable users to manage their data more easily so they can train better ML models. This tutorial shows you how to train a simple image classification model while streaming data from a Deep Lake dataset stored in the cloud.
Data Preprocessing
The first step is to select a dataset for training. This tutorial uses the Fashion MNIST dataset, which has already been converted into Deep Lake format. It is a simple image classification dataset that categorizes images by clothing type (trouser, shirt, etc.).
import deeplake
from PIL import Image
import numpy as np
import os, time
import torch
from torchvision import datasets, transforms, models

# Connect to the training and testing datasets
ds_train = deeplake.load('hub://activeloop/fashion-mnist-train')
ds_test = deeplake.load('hub://activeloop/fashion-mnist-test')
The next step is to define a transformation function that will process the data and convert it into a format that can be passed into a deep learning model. In this particular example, torchvision.transforms is used as a part of the transformation pipeline that performs operations such as normalization and image augmentation (rotation).
tform = transforms.Compose([
    transforms.ToPILImage(),  # Must convert to PIL image for subsequent operations to run
    transforms.RandomRotation(20),  # Image augmentation
    transforms.ToTensor(),  # Must convert to pytorch tensor for subsequent operations to run
    transforms.Normalize([0.5], [0.5]),
])
You can now create a PyTorch dataloader that connects the Deep Lake dataset to the PyTorch model using the provided ds.pytorch() method. This method automatically applies the transformation function, handles random shuffling (if desired), and converts Deep Lake data to PyTorch tensors. The num_workers parameter can be used to parallelize data preprocessing, which is critical for ensuring that preprocessing does not bottleneck the overall training workflow.
The transform input is a dictionary where each key is a tensor name and each value is the transformation function to apply to that tensor. If a tensor's data does not need to be returned, omit it from the keys. If a tensor's data does not need to be modified during preprocessing, set its transformation function to None.
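Putting this together, the dataloaders can be created along the following lines. This is a sketch rather than a prescribed recipe: it assumes the ds_train, ds_test, and tform objects from the snippets above, and the batch_size and num_workers values are illustrative choices. It also requires network access to the hosted dataset, so it will not run offline.

```python
# Illustrative settings; tune batch_size and num_workers for your hardware
batch_size = 32

train_loader = ds_train.pytorch(num_workers=0, batch_size=batch_size,
                                transform={'images': tform, 'labels': None},
                                shuffle=True)
test_loader = ds_test.pytorch(num_workers=0, batch_size=batch_size,
                              transform={'images': tform, 'labels': None},
                              shuffle=False)
```

Note how the transform dictionary applies tform to the images tensor while passing labels through unchanged, as described above.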
This tutorial uses a pre-trained ResNet18 neural network from the torchvision.models module, converted to a single-channel network for grayscale images.
Training is run on a GPU if one is available; otherwise, it runs on a CPU.
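A standard way to select the device is shown below; this line is assumed (it is not shown above), but the model.to(device) call that follows relies on a device variable like this:

```python
import torch

# Use a GPU if one is available, otherwise fall back to the CPU
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
```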
# Use a pre-trained ResNet18
model = models.resnet18(pretrained=True)

# Convert model to grayscale
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

# Update the fully connected layer based on the number of classes in the dataset
model.fc = torch.nn.Linear(model.fc.in_features, len(ds_train.labels.info.class_names))

model.to(device)

# Specify the loss function and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.1)
Training the Model
Helper functions for training and testing the model are defined. Note that the output from Deep Lake's PyTorch dataloader is fed into the model just like data from ordinary PyTorch dataloaders.
def train_one_epoch(model, optimizer, data_loader, device):
    model.train()

    # Zero the performance stats for each epoch
    running_loss = 0.0
    start_time = time.time()
    total = 0
    correct = 0

    for i, data in enumerate(data_loader):
        # get the inputs; data is a dict of {'images': ..., 'labels': ...}
        inputs = data['images']
        labels = torch.squeeze(data['labels'])

        inputs = inputs.to(device)
        labels = labels.to(device)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = model(inputs.float())
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
        accuracy = 100 * correct / total

        # Print performance statistics
        running_loss += loss.item()
        if i % 10 == 0:    # print every 10 batches
            batch_time = time.time()
            speed = (i + 1) / (batch_time - start_time)
            print('[%5d] loss: %.3f, speed: %.2f, accuracy: %.2f%%' %
                  (i, running_loss, speed, accuracy))

            running_loss = 0.0
            total = 0
            correct = 0


def test_model(model, data_loader):
    model.eval()

    start_time = time.time()
    total = 0
    correct = 0
    with torch.no_grad():
        for i, data in enumerate(data_loader):
            # get the inputs; data is a dict of {'images': ..., 'labels': ...}
            inputs = data['images']
            labels = torch.squeeze(data['labels'])

            inputs = inputs.to(device)
            labels = labels.to(device)

            # forward pass only; no gradients are needed during evaluation
            outputs = model(inputs.float())

            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

        accuracy = 100 * correct / total

    print('Finished Testing')
    print('Testing accuracy: %.1f%%' % (accuracy))
The model and data are ready for training🚀!
num_epochs = 1
for epoch in range(num_epochs):  # loop over the dataset multiple times
    print("------------------ Training Epoch {} ------------------".format(epoch + 1))
    train_one_epoch(model, optimizer, train_loader, device)

    test_model(model, test_loader)

print('Finished Training')
Congrats! You successfully trained an image classification model while streaming data directly from the cloud! 🎉