Generative AI Model: GANs (Part 2)

by Jyoti Yadav, May 29th, 2024

Welcome Folks!


In the previous blog, we talked about the core concepts of GANs and the model components, namely the Generator and the Discriminator. It also described the basic adversarial process involved in optimizing the model. GANs build on a very powerful but basic concept from game theory, the “Zero-Sum Game”: the gain of one player is the loss of the other, so the total sums to zero.


Applied to GANs, this means that if the Generator wins, the Discriminator is unable to tell fake images from real ones, and if the Discriminator wins, the Generator is not doing a good job.


To understand how this adversarial training works, and which components help the Generator and Discriminator achieve their goals, this blog focuses on a very basic loss function: “Cross Entropy.”

What Is Cross Entropy?

Cross Entropy is a measure of how far the predictions are from the actual results (in terms of probability). The higher the cross-entropy, the worse the predictions.

Basic Concepts

Let’s take an example to understand how it works. The task is to identify whether an email is spam or not. Cross entropy simply tells us how good the guess is compared to the actual outcome. Let’s break it down:


Scenario 1: The email is spam, and the guess was a 90% chance that it is spam. This is very close to the actual answer, implying the cross-entropy is low.


Scenario 2: The email is spam, but the guess was only a 10% chance that it is spam. This is very far from the actual answer and therefore has a very high cross-entropy.

Formula

The formula for cross-entropy:


L = -(y*log(p) + (1-y)*log(1-p))



where,

  • L is the loss
  • y is the actual label (spam = 1; not spam = 0)
  • p is the probability of a positive class (probability of it being spam)
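
A minimal sketch of this formula in Python (the function name binary_cross_entropy and the small eps clamp for numerical stability are my own additions, not part of the original formula):

import math

def binary_cross_entropy(y, p, eps=1e-12):
    # y: actual label (spam = 1; not spam = 0)
    # p: predicted probability of the positive class (spam)
    # eps keeps p away from exactly 0 or 1 so log() stays finite
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))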


Continuing with the spam email example, given an email, y can take two values:


Case 1:


y = 0 ; the email is not spam



  • Since y = 0, the term y*log(p) = 0


  • Implying L = - (1-y)*log(1-p)
    • (1-y) = 1
    • But if
      • p is close to 0 (good prediction) => log(1-p) is close to zero, implying L is close to zero


      • p is close to 1 (bad prediction) => log(1-p) becomes very negative, implying L is very high


Case 2:


y = 1 ; the email is spam


  • Since y = 1, the term (1-y)*log(1-p) = 0


  • Implying L = - y*log(p)
    • y = 1
    • But if
      • p is close to 1 (good prediction) => log(p) is close to zero, implying L is close to zero
      • p is close to 0 (bad prediction) => log(p) becomes very negative, implying L is very high


The above scenarios explain how cross entropy estimates the performance of a model: when the prediction is close to the actual label, the loss is very low; when it is far from the actual label, the loss is very high.
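
Plugging the spam scenarios into the hypothetical binary_cross_entropy helper sketched earlier shows the same behavior numerically:

print(binary_cross_entropy(0, 0.1))  # y = 0, good prediction -> ~0.105
print(binary_cross_entropy(0, 0.9))  # y = 0, bad prediction  -> ~2.303
print(binary_cross_entropy(1, 0.9))  # y = 1, good prediction -> ~0.105
print(binary_cross_entropy(1, 0.1))  # y = 1, bad prediction  -> ~2.303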

Cross Entropy in GANs

As explained in the last blog, GANs consist of two components: the Generator and the Discriminator. Since each has a different objective, the loss function differs for each.

Discriminator:

The role of the Discriminator is to distinguish between real and fake images; therefore, both the real and fake probabilities matter. Since it receives both real and fake images, its loss function contains both parts of the loss, whereas in the binary example above, the loss concerned a single email and whether it was spam or not.


Therefore, the loss function looks like the following:


L = - (log(D(real)) + log(1-D(fake)))


Let’s break it down:

  • Real images: D(real) is the Discriminator’s output on a real image, i.e., the probability it assigns to the image being real


  • Fake images: D(fake) is the Discriminator’s output on a fake (generated) image, i.e., the probability it assigns to the fake being real, so 1-D(fake) is the probability of correctly identifying it as fake
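
A minimal sketch of this loss in Python, assuming D(x) returns the probability that x is real (d_real and d_fake below stand for D(real) and D(fake)):

import math

def discriminator_loss(d_real, d_fake, eps=1e-12):
    # d_real: Discriminator's output on a real image (probability it is real)
    # d_fake: Discriminator's output on a fake image (probability it is real)
    # The Discriminator wants d_real -> 1 and d_fake -> 0
    d_real = min(max(d_real, eps), 1 - eps)
    d_fake = min(max(d_fake, eps), 1 - eps)
    return -(math.log(d_real) + math.log(1 - d_fake))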

Generator

The Generator’s task is to generate images that the Discriminator classifies as real. Therefore, it tries to minimize the probability that the Discriminator correctly identifies the fake images, or equivalently, to maximize the probability that the Discriminator mistakes them for real ones. Therefore, the cross entropy is:


L = - log(D(Fake))


Where,

  • D(Fake) is the probability the Discriminator assigns to a generated (fake) image being real; the Generator wants this probability to be as high as possible
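
And a matching sketch of the Generator loss, under the same assumption that D(fake) is the probability the Discriminator assigns to the fake image being real:

import math

def generator_loss(d_fake, eps=1e-12):
    # d_fake: Discriminator's output on a generated image (probability it is real)
    # The Generator wants the Discriminator to be fooled, i.e., d_fake -> 1
    d_fake = min(max(d_fake, eps), 1 - eps)
    return -math.log(d_fake)

When d_fake is close to 1, the loss is close to zero; when d_fake is close to 0, the loss grows very large, pushing the Generator to produce more convincing images.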


Cross-entropy is a fundamental concept in machine learning (not limited to GANs). It provides a clear, interpretable measure of the difference between the predicted and true distributions, thereby giving the model a direction for improving its performance. By mastering cross-entropy and its applications, you can better navigate the complexities of machine learning and leverage GANs to their full potential.


The section ends here, stay tuned for the next post!