Learning by Competition: Understanding Adversarial Neural Networks Part II
This is the second part of an essay exploring the intricacies of adversarial neural networks in modern deep learning systems. In the first part, we discussed some of the fundamental challenges that motivated the creation of adversarial neural networks as a mechanism to simulate human intuition in artificial intelligence (AI) agents. We also covered the main distinction between generative and discriminative classification systems, the two main schools of thought that existed before the arrival of adversarial neural networks.
For decades, the world of classification models was divided between generative and discriminative models. While both approaches have clear strengths, they often prove impractical in real-world scenarios. In 2014, a group of deep learning pioneers led by Ian Goodfellow published a research paper proposing the provocative idea of combining discriminative and generative techniques in a single adversarial model that could improve data classification in many high-dimensional scenarios. They called the new technique Generative Adversarial Networks (GANs).
Many knowledge-centric processes in life are accelerated by the friction created between different, often competing, interests. From the dynamics of stock markets to trade relationships, adversarial processes are a driving force behind many major economic trends. A more natural example of the influence of adversarial processes on learning can be seen in the way infants acquire knowledge by finding intuitive solutions to challenges.
Following some of those examples, the GAN researchers proposed a neural network formed by a discriminator and a generator model that compete in order to improve a classification process.
In the GAN model, the generator tries to generate data that follows the probability distribution of the training dataset. Mathematically speaking, the generator is itself a neural network that takes an input z sampled from a prior distribution p(z) and maps it to a generated sample G(z). The generated data is then fed to the discriminator network.
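As a concrete, deliberately tiny sketch of that idea, a generator can be pictured as a small feed-forward network that maps noise to data. The dimensions, activations, and randomly initialized weights below are illustrative assumptions for this essay, not the architecture from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: 8-d noise vectors, 2-d data points.
Z_DIM, HIDDEN, X_DIM = 8, 16, 2

# Randomly initialized weights stand in for trained generator parameters.
W1 = rng.normal(0.0, 0.1, (Z_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, X_DIM))
b2 = np.zeros(X_DIM)

def generator(z):
    """Map noise z ~ p(z) to a generated data point G(z)."""
    h = np.tanh(z @ W1 + b1)   # hidden layer
    return h @ W2 + b2         # generated sample

# Draw a batch of noise from the prior p(z) and generate fake data.
z = rng.normal(size=(4, Z_DIM))
fake = generator(z)
print(fake.shape)  # four generated 2-d samples
```

The only essential points are the two arrows: noise in from p(z), synthetic data out toward the discriminator.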
The other component of a GAN model is the discriminator, which is also a neural network. The discriminator takes both the samples produced by the generator and samples x drawn from the original training set, and tries to solve a classification problem: determining whether each input came from the original training set or not.
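The discriminator can be sketched in the same hedged style: a small network ending in a sigmoid, so its output can be read as the probability that the input is real. Again, the sizes and random weights are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 2-d inputs (matching the generator sketch above).
X_DIM, HIDDEN = 2, 16

V1 = rng.normal(0.0, 0.1, (X_DIM, HIDDEN))
c1 = np.zeros(HIDDEN)
V2 = rng.normal(0.0, 0.1, (HIDDEN, 1))
c2 = np.zeros(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def discriminator(x):
    """Return D(x): the estimated probability that x is a real sample."""
    h = np.tanh(x @ V1 + c1)
    return sigmoid(h @ V2 + c2)

# Score a batch of stand-in "real" training samples.
real = rng.normal(size=(4, X_DIM))
scores = discriminator(real)
print(scores.shape)  # one probability in (0, 1) per sample
```

In a real GAN the same discriminator is called on both real batches and generated batches, and its classification loss drives both networks' updates.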
As you can see, the magic of GAN models comes from the constant friction between the generator and discriminator networks. In that architecture, the generator tries to maximize the probability of the discriminator mistaking its outputs for real data, while the discriminator guides the generator to produce more realistic data.
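The original paper formalizes exactly this friction as a two-player minimax game over a value function V(D, G):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] +
  \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```

The discriminator maximizes V by assigning high probability D(x) to real samples and low probability D(G(z)) to generated ones; the generator minimizes V by making D(G(z)) as large as it can.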
In some contexts, you can evaluate GANs through the lens of game theory and see the two networks as players in a game, each trying to maximize its own outcome. If the game reaches a Nash equilibrium, the generator has captured the training data distribution. As a result, the discriminator is always unsure whether its inputs are real or not, assigning a probability of 1/2 to every sample.
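To make those dynamics tangible, here is a toy sketch of the game on one-dimensional data: the "dataset" is a Gaussian centered at 3, and both players are shrunk to two-parameter models so the gradients of the alternating updates can be written by hand. None of this comes from the original paper; it is only an illustration of the adversarial loop:

```python
import numpy as np

rng = np.random.default_rng(0)
lr, steps, batch = 0.05, 2000, 64

# Toy "real" data distribution standing in for a training set.
def sample_real(n):
    return rng.normal(3.0, 0.5, n)

# Generator G(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for _ in range(steps):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x_real = sample_real(batch)
    z = rng.normal(size=batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    z = rng.normal(size=batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    dl_dx = -(1 - d_fake) * w   # gradient of -log D(x) w.r.t. x
    a -= lr * np.mean(dl_dx * z)
    b -= lr * np.mean(dl_dx)

print(f"learned shift b = {b:.2f}")  # drifts toward the real mean of 3
```

The interesting behavior is the pull toward equilibrium: as the generated distribution approaches the real one, the discriminator's scores for both batches crowd around 1/2 and the gradients shrink, which is the toy counterpart of the Nash equilibrium described above.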