Creating Object Detection Datasets
How to convert a YOLO object detection dataset to Hub format.

This tutorial is also available as a Colab Notebook.

Object detection and image annotation using bounding boxes is one of the most common data types for Computer Vision datasets. This tutorial demonstrates how to convert an object detection dataset in YOLO format into Hub, and a similar process can be used for uploading object detection data in other formats.
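In YOLO format, each image has a corresponding plain-text annotation file with one object per line: a class index followed by the box center coordinates and the box width and height, all normalized to the image dimensions. For example, a label file describing two objects might look like this (the values are illustrative):

0 0.48 0.52 0.25 0.40
2 0.71 0.33 0.10 0.12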

Create the Hub Dataset

The first step is to download the small dataset below, called animals object detection:
animals_od.zip (278KB) – animals object detection dataset
The dataset has the following folder structure:

animals_od
|_images
    |_image_1.jpg
    |_image_2.jpg
    |_image_3.jpg
    |_image_4.jpg
|_boxes
    |_image_1.txt
    |_image_2.txt
    |_image_3.txt
    |_image_4.txt
    |_classes.txt
Now that you have the data, let's create a Hub Dataset in the ./animals_od_hub folder by running:
import hub
from PIL import Image, ImageDraw
import numpy as np
import os

ds = hub.empty('./animals_od_hub') # Create the dataset
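If the ./animals_od_hub folder already exists from a previous run, hub.empty may refuse to overwrite it by default; depending on your Hub version, passing overwrite=True (e.g. hub.empty('./animals_od_hub', overwrite=True)) is one way to start from a clean slate.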
Next, let's specify the folder paths containing the images and annotations in the dataset. In YOLO format, images and annotations are typically matched using a common filename, such as image -> filename.jpeg and annotation -> filename.txt. It's also helpful to create a list of all of the image files and the class names contained in the dataset.
img_folder = './animals_od/images'
lbl_folder = './animals_od/boxes'

# List of all images
fn_imgs = os.listdir(img_folder)

# List of all class names
with open(os.path.join(lbl_folder, 'classes.txt'), 'r') as f:
    class_names = f.read().splitlines()
Since annotations in YOLO are typically stored in text files, it's useful to write a helper function that parses the annotation file and returns numpy arrays with the bounding box coordinates and bounding box classes.
def read_yolo_boxes(fn:str):
    """
    Function reads a label.txt YOLO file and returns a numpy array of yolo_boxes
    for the box geometry and yolo_labels for the corresponding box labels.
    """

    box_f = open(fn)
    lines = box_f.read()
    box_f.close()

    # Split each box into a separate line
    lines_split = lines.splitlines()

    yolo_boxes = np.zeros((len(lines_split),4))
    yolo_labels = np.zeros(len(lines_split))

    # Go through each line and parse data
    for l, line in enumerate(lines_split):
        line_split = line.split()
        yolo_boxes[l,:] = np.array((float(line_split[1]), float(line_split[2]), float(line_split[3]), float(line_split[4])))
        yolo_labels[l] = int(line_split[0])

    return yolo_boxes, yolo_labels
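As a quick sanity check, the helper can be run on one of the downloaded label files; the first dimension of both returned arrays should equal the number of boxes in that file (a sketch, assuming image_1.txt from the folder structure above):

# Hypothetical spot-check; the exact shapes depend on the file contents
boxes, labels = read_yolo_boxes(os.path.join(lbl_folder, 'image_1.txt'))
print(boxes.shape)   # (N, 4) - one row of [xc, yc, w, h] per box
print(labels.shape)  # (N,)   - one class index per box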
Finally, let's create the tensors and iterate through all the images in the dataset in order to populate the data in Hub. Boxes and their labels will be stored in separate tensors, and for a given sample, the first axis of the boxes array corresponds to the first-and-only axis of the labels array (i.e. if there are 3 boxes in an image, the labels array has length 3 and the boxes array has shape 3x4).
with ds:
    ds.create_tensor('images', htype='image', sample_compression = 'jpeg')
    ds.create_tensor('labels', htype='class_label', class_names = class_names)
    ds.create_tensor('boxes', htype='bbox')

    for fn_img in fn_imgs:

        img_name = os.path.splitext(fn_img)[0]
        fn_box = img_name+'.txt'

        # Get the arrays for the bounding boxes and their classes
        yolo_boxes, yolo_labels = read_yolo_boxes(os.path.join(lbl_folder,fn_box))

        # Append data to tensors
        ds.images.append(hub.read(os.path.join(img_folder, fn_img)))
        ds.labels.append(yolo_labels.astype(np.uint32))
        ds.boxes.append(yolo_boxes.astype(np.float32))
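At this point it can be worth spot-checking the upload, for example by confirming that the three tensors have equal length and that each boxes sample is N x 4. A minimal sketch:

# Basic consistency checks on the populated dataset
print(len(ds.images), len(ds.labels), len(ds.boxes))  # should all be equal
print(ds.boxes[0].numpy().shape)                      # (N, 4) for the first sample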

Inspect the Hub Dataset

Let's check out the third sample from this dataset, which contains two bounding boxes.
# Draw bounding boxes for the third image

ind = 2
img = Image.fromarray(ds.images[ind].numpy())
draw = ImageDraw.Draw(img)
(w,h) = img.size
boxes = ds.boxes[ind].numpy()

for b in range(boxes.shape[0]):
    (xc,yc) = (int(boxes[b][0]*w), int(boxes[b][1]*h))
    (x1,y1) = (int(xc-boxes[b][2]*w/2), int(yc-boxes[b][3]*h/2))
    (x2,y2) = (int(xc+boxes[b][2]*w/2), int(yc+boxes[b][3]*h/2))
    draw.rectangle([x1,y1,x2,y2], width=2)
    draw.text((x1,y1), ds.labels.info.class_names[ds.labels[ind].numpy()[b]])
# Display the image and its bounding boxes
img
Congrats! You just created a beautiful object detection dataset! 🎉
Note: For optimal object detection model performance, it is often important for datasets to contain images with no annotations (see the 4th sample in the dataset above). For that use case, in order to maintain equal length between the images, boxes, and labels tensors, users can upload empty numpy arrays, as long as len(sample.shape) for an empty and a non-empty sample is equal.
Therefore, an empty bounding box can be added using ds.boxes.append(np.zeros((0,4))), because len(sample.shape) == 2, just like for a bounding box with data.
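Putting that together, an annotation-free image could be appended along these lines; note that the upload loop above already handles this case, since read_yolo_boxes returns a (0, 4) boxes array and a length-0 labels array for an empty .txt file (a sketch, using image_4.jpg as the unannotated sample):

# Append an unannotated image while keeping all three tensors the same length
ds.images.append(hub.read(os.path.join(img_folder, 'image_4.jpg')))
ds.labels.append(np.zeros(0, dtype=np.uint32))      # shape (0,): no labels
ds.boxes.append(np.zeros((0,4), dtype=np.float32))  # shape (0, 4): no boxes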