What is Hub?¶
The fastest way to access and manage datasets for PyTorch and TensorFlow.
Hub provides fast access to state-of-the-art datasets for deep learning, enabling data scientists to manage them, build scalable data pipelines, and connect them to PyTorch and TensorFlow.
Problems with Current Workflows¶
Through our experience working with deep learning companies and researchers, we identified several problems with current workflows in deep learning data management. Most of the time, data scientists and ML researchers work on data management and preprocessing instead of modeling. Deep learning often requires working with large datasets, which can grow to terabytes or even petabytes in size.
- Data is hard to manage, version-control, and track.
- Downloading data and linking it to training or inference code is time-consuming.
- There is no easy way to access just a chunk of a dataset, or to visualize it.
Wouldn’t it be more convenient to have large datasets stored and version-controlled as a single NumPy-like array in the cloud, accessible from any machine at scale?
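The core idea, slicing a remote dataset and fetching only the chunks that slice touches, can be sketched in plain Python. Everything below (`CloudArray`, the simulated `remote` store, the chunk size) is an illustrative assumption for this sketch, not Hub's actual API:

```python
# Sketch of a chunked, lazily fetched array; names here are
# illustrative only and do not reflect Hub's real interface.

CHUNK = 4  # elements per chunk (assumed for the example)

# Simulated remote storage: chunk index -> list of values
remote = {i: list(range(i * CHUNK, (i + 1) * CHUNK)) for i in range(250)}

class CloudArray:
    """NumPy-like slicing that downloads only the chunks a slice touches."""

    def __init__(self, store, length):
        self.store = store
        self.length = length
        self.fetched = 0  # number of chunk "downloads", for illustration

    def _fetch(self, idx):
        # In a real system this would be a network request to object storage.
        self.fetched += 1
        return self.store[idx]

    def __getitem__(self, s):
        start, stop, _ = s.indices(self.length)
        out = []
        for c in range(start // CHUNK, (stop - 1) // CHUNK + 1):
            chunk = self._fetch(c)
            lo = max(start, c * CHUNK) - c * CHUNK
            hi = min(stop, (c + 1) * CHUNK) - c * CHUNK
            out.extend(chunk[lo:hi])
        return out

arr = CloudArray(remote, 1000)
sample = arr[6:10]  # touches only chunks 1 and 2, not all 250
```

Here `sample` is `[6, 7, 8, 9]` and only two chunks were fetched, which is the property that makes working with terabyte-scale datasets from any machine practical.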
- Local Development
- Why Hub?
- API Reference
- Data Pipelines
- Dataset Filtering