SWAG Dataset
Load the SWAG dataset in Python fast with one line of code. 113,000 multiple-choice questions. Stream SWAG while training ML models in PyTorch and TensorFlow.


What is SWAG Dataset?

The SWAG (Situations With Adversarial Generations) dataset comprises 113,000 multiple-choice questions covering a wide range of grounded situations. Each question is derived from a pair of consecutive video captions taken from the ActivityNet Captions dataset and the Large Scale Movie Description Challenge. The dataset was built using adversarial filtering, and it supports research on commonsense natural language inference (NLI).

Download SWAG Dataset in Python

Instead of downloading the SWAG dataset, you can effortlessly load it in Python via our open-source package, Hub, with just one line of code.

Load SWAG Dataset Training Subset in Python

import hub
ds = hub.load('hub://activeloop/swag-train')

Load SWAG Dataset Testing Subset in Python

import hub
ds = hub.load('hub://activeloop/swag-test')

Load SWAG Dataset Validation Subset in Python

import hub
ds = hub.load('hub://activeloop/swag-val')
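
Once a split is loaded, you can take a quick look at its contents before training. The snippet below is a minimal sketch: len(ds), ds.tensors, and per-tensor .numpy() access follow Hub's usual API, but the exact return types for text fields (for example, numpy arrays of strings) may vary between Hub versions.

import hub

ds = hub.load('hub://activeloop/swag-train')

# Number of samples in the split and the names of the available tensors (fields).
print(len(ds))
print(list(ds.tensors))

# Peek at the first sample's startphrase and gold ending (return types assumed).
print(ds['start_phrase'][0].numpy())
print(ds['gold_ending'][0].numpy())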

SWAG Dataset Structure

SWAG Data Fields

For the training and validation set (see the sketch after these lists for how the fields combine into a multiple-choice question):
  • video_id: tensor containing the video id.
  • fold_ind: tensor containing the fold id.
  • start_phrase: tensor containing the startphrase of the context.
  • gold_ending: tensor containing the gold (correct) ending.
  • distractor_0: tensor containing the first distractor, considered the highest-quality distractor.
  • distractor_1: tensor containing the second distractor.
  • distractor_2: tensor containing the third distractor.
  • distractor_3: tensor containing the fourth distractor, considered the lowest-quality distractor.
  • gold_source: tensor containing the labels gold and gen. gen indicates the generated best answer, and gold indicates the real answer, which is considered the second best.
  • gold_type: label containing the values 'pos' and 'unl'.
  • distractor_0_type: label containing the values 'pos' and 'unl'.
  • distractor_1_type: label containing the values 'pos' and 'unl'.
  • distractor_2_type: label containing the values 'pos' and 'unl'.
  • distractor_3_type: label containing the values 'n/a', 'pos', and 'unl'.
  • sentence_1: tensor containing the first sentence.
  • sentence_2: tensor containing the second sentence.
For the test set:
  • video_id: tensor containing the video id.
  • fold_ind: tensor containing the fold id.
  • start_phrase: tensor containing the startphrase of the context.
  • gold_source: tensor containing the labels gold and gen. gen indicates the generated best answer, and gold indicates the real answer, which is considered the second best.
  • ending0: tensor containing the first ending.
  • ending1: tensor containing the second ending.
  • ending2: tensor containing the third ending.
  • ending3: tensor containing the fourth ending.
  • sentence_1: tensor containing the first sentence.
  • sentence_2: tensor containing the second sentence.
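
As a quick illustration of how these fields fit together, the minimal sketch below assembles one training sample into a multiple-choice question (the startphrase plus the gold ending and the distractors). The field names follow the lists above; the indexing and .numpy() calls are assumptions about Hub's tensor API, and text values may need decoding depending on the Hub version.

import hub

ds = hub.load('hub://activeloop/swag-train')

# Read the first sample's fields (API usage assumed).
startphrase = ds['start_phrase'][0].numpy()
# The gold ending plus the four distractors; distractor_3 may be a placeholder
# for samples whose distractor_3_type is 'n/a'.
candidates = [ds['gold_ending'][0].numpy()] + [
    ds[f'distractor_{i}'][0].numpy() for i in range(4)
]

# Present the context followed by the candidate endings; the gold ending is candidates[0].
print(startphrase)
for idx, ending in enumerate(candidates):
    print(idx, ending)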

SWAG Data Splits

  • The SWAG train set is composed of 73,000 multiple-choice questions about grounded situations.
  • The SWAG validation set is composed of 20,000 multiple-choice questions about grounded situations.
  • The SWAG test set is composed of 20,000 multiple-choice questions about grounded situations and is reserved for blind evaluation.

How to use SWAG Dataset with PyTorch and TensorFlow in Python

Train a model on SWAG dataset with PyTorch in Python

Let's use Hub's built-in PyTorch one-line dataloader to connect the data to the compute:
dataloader = ds.pytorch(num_workers=0, batch_size=4, shuffle=False)
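
As a rough sketch of how this dataloader is consumed, the loop below iterates over batches. The assumption that each batch behaves like a mapping keyed by tensor name, and the tokenizer/model placeholders, are illustrative and not part of Hub's documented API.

dataloader = ds.pytorch(num_workers=0, batch_size=4, shuffle=False)

for batch in dataloader:
    # Assumed layout: a mapping from tensor name to the batched values.
    startphrases = batch['start_phrase']
    gold_endings = batch['gold_ending']
    # Tokenize the (startphrase, ending) pairs and feed them to your model here, e.g.
    # logits = model(**tokenizer(startphrases, gold_endings, ...))  # placeholders
    break  # remove this to iterate over the full split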

Train a model on SWAG dataset with TensorFlow in Python

dataloader = ds.tensorflow()
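
ds.tensorflow() returns a tf.data.Dataset, so the usual tf.data iteration patterns apply. In the sketch below, the element structure (a dictionary keyed by tensor name) is an assumption; adjust the keys to match what the returned dataset actually yields.

ds_tf = ds.tensorflow()

# Inspect the first two samples (assumed layout: dict of tensors per sample).
for sample in ds_tf.take(2):
    print(sample['start_phrase'], sample['gold_ending'])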

SWAG Dataset Creation

Source Data

Data Collection and Normalization Information
The dataset was created by taking pairs of consecutive video captions from ActivityNet Captions and the Large Scale Movie Description Challenge (LSMDC). These two sources differ somewhat in nature and together provide broader coverage. For every pair of captions, a constituency parser is used to break the second sentence into noun and verb phrases. Each question has a manually validated gold ending and three distractors.
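
The adversarial filtering procedure itself is detailed in the SWAG paper; the sketch below is only a simplified illustration of the core loop (repeatedly train a classifier to spot machine-generated endings and swap out the distractors it finds easy), not the authors' implementation. The helpers passed in (generate_candidates, train_classifier, and the classifier's score method) are hypothetical.

# Simplified, hypothetical sketch of the adversarial filtering loop.
def adversarial_filtering(contexts, gold_endings, generate_candidates, train_classifier,
                          n_distractors=3, n_rounds=10):
    # Pool of machine-generated candidate endings for each context (hypothetical helper).
    candidates = [generate_candidates(c) for c in contexts]
    distractors = [cands[:n_distractors] for cands in candidates]

    for _ in range(n_rounds):
        # Train a classifier to distinguish gold endings from the current distractors.
        clf = train_classifier(contexts, gold_endings, distractors)
        for i, cands in enumerate(candidates):
            # Keep the generated endings the classifier scores as most plausible,
            # i.e. the ones it finds hardest to reject.
            ranked = sorted(cands, key=lambda e: clf.score(contexts[i], e), reverse=True)
            distractors[i] = ranked[:n_distractors]
    return distractors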

Additional Information about SWAG Dataset

SWAG Dataset Description

SWAG Dataset Curators

Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi

SWAG Dataset Licensing Information

MIT License

SWAG Dataset Citation Information

@inproceedings{zellers2018swagaf,
    title={SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference},
    author={Zellers, Rowan and Bisk, Yonatan and Schwartz, Roy and Choi, Yejin},
    booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
    year={2018}
}

SWAG Dataset FAQs

What is the SWAG dataset for Python?
The SWAG (Situations With Adversarial Generations) dataset is made up of 113,000 multiple-choice questions about grounded situations. It is a large-scale dataset for the task of grounded commonsense inference, unifying natural language inference and physically grounded reasoning.

What is the SWAG dataset used for?

The SWAG dataset is used to train NLP models that can answer multiple-choice questions about grounded situations.

How to download the SWAG dataset in Python?

Load the SWAG dataset with one line of code using Activeloop Hub, the open-source Python package. Check out detailed instructions on how to load the SWAG dataset training subset in Python, load the SWAG dataset testing subset in Python, and load the SWAG dataset validation subset in Python.
How can I use the SWAG dataset in PyTorch or TensorFlow?
You can train a model on the SWAG dataset with PyTorch in Python or train a model on the SWAG dataset with TensorFlow in Python. You can stream the SWAG dataset while training a model in PyTorch or TensorFlow with one line of code using Activeloop Hub, the open-source package written in Python.