OpenAI Glow and the Art of Learning from Small Datasets
A generative model that can master complex tasks from small training datasets.

I recently started an AI-focused educational newsletter that already has over 100,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below:
Since the early days of machine learning, AI systems have faced two big challenges on the road to mainstream adoption. First, there is the data-efficiency problem: machine learning and deep learning models typically need to be trained on large, accurately labeled datasets, which are expensive to build and maintain. Second, there is the generalization problem: AI agents struggle to build new knowledge that differs from their training data. Humans, by contrast, are remarkably efficient at learning with minimal supervision and at rapidly generalizing knowledge from just a few examples.
Generative modeling is one of the deep learning disciplines focused on addressing the two challenges mentioned above. Conceptually, a generative model observes an initial dataset, such as a set of pictures, and tries to learn how that data was generated. In more mathematical terms, generative models try to infer all the dependencies within very high-dimensional input data, usually specified in the form of a full joint probability distribution. Entire deep learning areas, such as speech synthesis and semi-supervised learning, are built on generative models. Recently, generative models such as generative adversarial networks (GANs) have become extremely popular within the deep learning community. A couple of years ago, OpenAI experimented with a lesser-known technique called flow-based generative models in order to improve over…
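To make the idea of learning a full joint probability distribution concrete, here is a minimal sketch of a flow-based model in PyTorch. It implements a single affine coupling layer, the kind of invertible building block that Glow stacks many times, and trains it by maximizing the exact log-likelihood given by the change-of-variables formula: log p(x) = log p(f(x)) + log|det df/dx|. The class names, network sizes, and toy data below are illustrative assumptions, not OpenAI's implementation.

```python
# A minimal sketch of a flow-based generative model (not OpenAI's code).
# One affine coupling layer trained by exact maximum likelihood.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Splits the input in half; one half parameterizes an invertible
    affine transform of the other, so the Jacobian determinant is cheap."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),  # outputs log-scale and shift
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)          # keep scales numerically stable
        z2 = x2 * torch.exp(log_s) + t     # invertible affine transform
        log_det = log_s.sum(dim=1)         # log |det Jacobian| of the map
        return torch.cat([x1, z2], dim=1), log_det

    def inverse(self, z):
        z1, z2 = z.chunk(2, dim=1)
        log_s, t = self.net(z1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x2 = (z2 - t) * torch.exp(-log_s)  # exact inverse of forward()
        return torch.cat([z1, x2], dim=1)

dim = 4
flow = AffineCoupling(dim)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
base = torch.distributions.Normal(0.0, 1.0)  # simple base distribution

for step in range(1000):
    x = torch.randn(256, dim) * 2.0 + 1.0    # toy "dataset" for illustration
    z, log_det = flow(x)
    log_prob = base.log_prob(z).sum(dim=1) + log_det
    loss = -log_prob.mean()                  # negative log-likelihood
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: draw z from the base Gaussian and invert the flow exactly.
with torch.no_grad():
    samples = flow.inverse(torch.randn(16, dim))
```

The coupling design is what distinguishes flows from GANs and VAEs: because the transform is invertible with a triangular Jacobian, the model can compute exact likelihoods during training and generate samples by simply running the same network in reverse.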