Continuous sentence representation and generation

Full Featured (30 min.)

Variational autoencoders (VAEs) are state-of-the-art generative models that make it possible to build better language models with more coherent and diverse output. I will explain what VAEs are and review recent papers applying them to continuous sentence representation (namely arXiv:1511.06349 and arXiv:1702.02390). I will also discuss my recent research results on combining VAEs with sequence-to-sequence models, and of course read some computer-generated poetry.
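To make the idea concrete, here is a minimal sketch of a sentence VAE in Python with PyTorch (the framework, the GRU encoder/decoder, and all hyperparameters are illustrative assumptions, not the setup used in the talk or the cited papers): an encoder maps a token sequence to the parameters of a Gaussian posterior, the reparameterization trick samples a latent code z, and a decoder reconstructs the sentence conditioned on z.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceVAE(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        emb = self.embed(tokens)                       # (batch, seq, embed_dim)
        _, h = self.encoder(emb)                       # h: (1, batch, hidden_dim)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Use the latent code to initialize the decoder's hidden state
        h0 = torch.tanh(self.latent_to_hidden(z)).unsqueeze(0)
        dec_out, _ = self.decoder(emb, h0)             # teacher forcing on inputs
        logits = self.out(dec_out)                     # (batch, seq, vocab_size)
        return logits, mu, logvar

def vae_loss(logits, targets, mu, logvar, kl_weight=1.0):
    # Reconstruction term: cross-entropy over the vocabulary at each position
    rec = F.cross_entropy(logits.transpose(1, 2), targets, reduction="mean")
    # KL divergence between the posterior N(mu, sigma^2) and the prior N(0, I)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl_weight * kl

In practice the KL weight is often annealed from 0 to 1 during training, a trick discussed in arXiv:1511.06349 to keep the latent code from being ignored by a strong autoregressive decoder.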