Understanding Representation Learning Paradigms with Applications to Low Resource Text Classification
Authors
Garg, Siddhant
Type
Technical Report
Abstract
A crucial component of modern machine learning systems is learning input representations that can be used for prediction tasks. The high cost of labelling and the easy availability of unlabelled data have made representation learning techniques on unlabelled data popular. In this thesis we present two ideas in the domain of representation learning.

First, we show that self-supervised representation learning approaches such as variational auto-encoders and masked self-supervision can be viewed as imposing a regularization on the representation via a learnable function. We present a discriminative theoretical framework for analysing the underlying assumptions and sample complexities of representation learning via such functional regularizations. Our results show that functional regularization on unlabelled data can prune the hypothesis space and reduce the sample complexity of labelled data.

We then consider the domain of NLP, where fine-tuning pre-trained sentence embedding models like BERT has become the default transfer learning approach. We propose an alternative transfer learning approach called SimpleTran for low-resource text classification, characterized by small datasets. We train a simple sentence embedding model on the target dataset, combine its output embedding with that of the pre-trained model via concatenation or dimension reduction, and finally train a classifier on the combined embedding, either fixing the embedding model weights or training the classifier and the embedding models end-to-end. With end-to-end training, SimpleTran outperforms fine-tuning on small and medium sized datasets with negligible computational overhead. We provide theoretical analysis for our method, identifying conditions under which it has advantages.
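To make the first idea concrete, the objective below is a schematic sketch in our own notation (the symbols phi, f, psi, and lambda are illustrative and not fixed by the abstract): phi is the representation map, f the predictor, and psi the learnable regularization function, e.g. the decoder of a variational auto-encoder or the masked-token predictor in masked self-supervision.

\min_{\phi,\, f,\, \psi} \;\; \sum_{(x,y) \in S_{\mathrm{lab}}} \ell\bigl(f(\phi(x)),\, y\bigr) \;+\; \lambda \sum_{x \in S_{\mathrm{unlab}}} \ell_{\mathrm{reg}}\bigl(\psi(\phi(x)),\, x\bigr)

The first term is the usual supervised loss on labelled data; the second is the functional regularization computed on unlabelled data. The abstract's claim is that this second term prunes the hypothesis space available to the predictor, lowering the labelled sample complexity.

The combination step of SimpleTran can likewise be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the thesis implementation: the class and argument names are invented, both encoders are assumed to map their inputs to fixed-size sentence embeddings, only the concatenation variant is shown (the dimension-reduction variant would add a projection after the concatenation), and the training loop is elided.

import torch
import torch.nn as nn

class SimpleTranClassifier(nn.Module):
    # Hypothetical name; combines a small task-specific embedding
    # with a pre-trained one (e.g. a frozen BERT sentence encoder).
    def __init__(self, simple_encoder, pretrained_encoder,
                 simple_dim, pretrained_dim, num_classes,
                 end_to_end=True):
        super().__init__()
        self.simple_encoder = simple_encoder          # trained on the target dataset
        self.pretrained_encoder = pretrained_encoder  # pre-trained sentence embedder
        if not end_to_end:
            # Fixed-embedding variant: freeze both encoders so that
            # only the classifier head receives gradient updates.
            for enc in (self.simple_encoder, self.pretrained_encoder):
                for p in enc.parameters():
                    p.requires_grad = False
        # Linear classifier on the concatenated embedding.
        self.classifier = nn.Linear(simple_dim + pretrained_dim, num_classes)

    def forward(self, x_simple, x_pretrained):
        # Each encoder produces a fixed-size sentence embedding.
        e_simple = self.simple_encoder(x_simple)
        e_pretrained = self.pretrained_encoder(x_pretrained)
        # Concatenation variant of the combination step.
        combined = torch.cat([e_simple, e_pretrained], dim=-1)
        return self.classifier(combined)

# Toy usage with stand-in encoders; dimensions are illustrative only.
simple = nn.Sequential(nn.Linear(50, 64), nn.Tanh())
pretrained = nn.Identity()  # stands in for a frozen 768-dim BERT embedding
model = SimpleTranClassifier(simple, pretrained, 64, 768, num_classes=2)
logits = model(torch.randn(8, 50), torch.randn(8, 768))  # shape (8, 2)

With end_to_end=True, both encoders and the classifier are updated jointly, which is the configuration the abstract reports as outperforming fine-tuning on small and medium sized datasets.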