We don't need no labels: the future of pretraining and self-supervised learning
Telling a cat from a bird? That's easy; most infants can do that. But how about learning to paint a black-and-white photo with realistic color? It's time for your models to grow up. We find that transfer learning from different datasets and tasks saves a lot of time and money when labels are scarce and data is limited. In this lecture, I review self-supervised methods used to pretrain models on unlabeled data. Methods from the fields of vision, audio, and NLP will be refined so they are applicable to other domains and effective on your data.
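To make the colorization idea concrete, here is a minimal sketch (not the lecture's actual code) of how a self-supervised training pair is built from a single unlabeled photo: the grayscale version becomes the model input, and the original colors become the target, so no human annotation is needed.

```python
import numpy as np

def make_colorization_pair(image):
    """Turn one unlabeled RGB image (H, W, 3, values in [0, 1]) into a
    self-supervised pair: grayscale input, original colors as target."""
    # Standard luminance weights for RGB -> grayscale conversion.
    gray = image @ np.array([0.299, 0.587, 0.114])
    return gray[..., np.newaxis], image  # (input, target)

# Any unlabeled photo works -- the labels come from the data itself.
rgb = np.random.rand(64, 64, 3)
x, y = make_colorization_pair(rgb)
print(x.shape, y.shape)  # (64, 64, 1) (64, 64, 3)
```

A model trained to map `x` back to `y` must learn what objects look like in order to guess their colors, and those learned features transfer to downstream tasks where labels are scarce.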