Entailment Detection by Fine-tuning BERT
The model in this repository is fine-tuned from Google's BERT, an encoder-only transformer-based model.
The Multi-NLI dataset, created at New York University, is used for fine-tuning.
Accuracy achieved: ~74%

Notebook used for fine-tuning: here
N.B.: Due to computational resource constraints, only 11K samples were used for fine-tuning. There is room for accuracy improvement if the model is trained on all 390K samples available in the dataset.
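
A minimal sketch of this fine-tuning setup, using the Hugging Face `transformers` and `datasets` libraries. The model checkpoint (`bert-base-uncased`), hyperparameters, and sampling details are assumptions; the original notebook may differ.

```python
# MultiNLI label ids as exposed by the `datasets` library.
LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}


def fine_tune(num_samples=11_000, model_name="bert-base-uncased"):
    # Heavy dependencies are imported lazily so the label map above
    # can be inspected without them installed.
    from datasets import load_dataset
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
        Trainer,
        TrainingArguments,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=len(LABELS)
    )

    # Keep a small random subsample, mirroring the 11K constraint
    # mentioned in the note above.
    train = load_dataset("multi_nli", split="train").shuffle(seed=42)
    train = train.select(range(num_samples))

    def encode(batch):
        # BERT takes the premise/hypothesis pair as two input segments.
        return tokenizer(
            batch["premise"],
            batch["hypothesis"],
            truncation=True,
            max_length=128,
        )

    train = train.map(encode, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="bert-mnli",
            num_train_epochs=3,
            per_device_train_batch_size=16,
            learning_rate=2e-5,
        ),
        train_dataset=train,
        tokenizer=tokenizer,
    )
    trainer.train()
    return trainer


if __name__ == "__main__":
    fine_tune()
```

Entailment detection then reduces to three-way sequence-pair classification: the predicted class id is mapped back through `LABELS` to decide whether the premise entails the hypothesis.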