The method consists of an “EEG-in/image-out” processing pipeline, in which EEG signals fed to a Generator are converted into realistic images.
Images Generated Using a Conditional GAN
Generative Adversarial Networks (GANs) are used to generate, from scratch, data that follows the distribution of the training dataset; in our case, images similar to those in our training dataset.
A GAN consists of two networks: a Generator and a Discriminator.
With an ordinary GAN, we can generate realistic images from noise alone; no condition is provided as input to the Generator.
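For reference, an ordinary GAN is trained as a minimax game between the two networks. This is the standard objective from Goodfellow et al. (2014), written here for context:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$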
In our case, the Generator needs to generate a realistic image given its corresponding EEG signal (the condition).
Example of a Conditional GAN
The difference between a GAN and a Conditional GAN lies in the additional conditioning input y provided to both the discriminator and the generator.
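Concretely, in Mirza and Osindero's conditional GAN formulation, both terms of the objective are conditioned on y:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x \mid y)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z \mid y) \mid y))]$$

Here, y is the EEG feature vector described below.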
Overview of the architecture design of the proposed EEG-driven image generation approach
The generator network is trained in a conditional GAN framework to produce images from EEG features, so that the generated images match the conditioning vector (the EEG features).
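As a rough illustration of such a conditional generator, here is a minimal PyTorch sketch that concatenates a noise vector with the EEG feature vector and upsamples to a 32x32 image. The layer sizes and the dimensions noise_dim and eeg_dim are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Generates an image from a noise vector concatenated with an
    EEG feature vector (the conditioning vector). Sizes are assumptions."""

    def __init__(self, noise_dim=100, eeg_dim=128, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # Project the (noise + condition) vector up to a 4x4 feature map
            nn.ConvTranspose2d(noise_dim + eeg_dim, 512, 4, 1, 0),
            nn.BatchNorm2d(512),
            nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1),           # -> 8x8
            nn.BatchNorm2d(256),
            nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),           # -> 16x16
            nn.BatchNorm2d(128),
            nn.ReLU(True),
            nn.ConvTranspose2d(128, img_channels, 4, 2, 1),  # -> 32x32
            nn.Tanh(),
        )

    def forward(self, noise, eeg_features):
        # Concatenate noise and EEG condition, reshape to a 1x1 spatial map
        x = torch.cat([noise, eeg_features], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)
```

Concatenating the condition with the noise at the input is the simplest conditioning scheme; other designs inject the condition at intermediate layers instead.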
Conditioning Vector?
EEG feature encoder architecture.
Raw EEG signals are processed by an RNN-based encoder, which is trained to output a vector of what we call EEG features: visually relevant and class-discriminative information extracted from the input signals.
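A minimal sketch of such an encoder, assuming an LSTM backbone and a classification head used during training to make the features class-discriminative (the channel count, feature size, and class count are illustrative assumptions):

```python
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """Maps a raw EEG sequence (batch, time, channels) to a fixed-length
    feature vector; a classifier head supervises training so the features
    become class-discriminative. Dimensions here are assumptions."""

    def __init__(self, n_channels=128, hidden_dim=128,
                 feature_dim=128, n_classes=40):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden_dim, batch_first=True)
        self.to_features = nn.Linear(hidden_dim, feature_dim)
        self.classifier = nn.Linear(feature_dim, n_classes)  # training-time head

    def forward(self, eeg):
        # The final hidden state summarizes the whole EEG sequence
        _, (h_n, _) = self.lstm(eeg)
        features = self.to_features(h_n[-1])
        return features, self.classifier(features)
```

At generation time, only the feature vector is used; the classifier head exists solely to shape the features during training.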
A discriminative model then checks whether the outputs of the generator are plausible and also match the conditioning criteria.
The discriminator is not trained only on real images with correct conditions and fake images with arbitrary conditions; that setup would force it to learn, without any explicit supervision, to distinguish real images paired with correct conditions from real images paired with wrong conditions. Instead, we also provide a "wrong" sample: a real image paired with a wrong condition, randomly chosen as the representative EEG feature vector of a different class.
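A sketch of the resulting discriminator update, assuming a discriminator D(image, condition) that outputs a probability via a sigmoid. The three kinds of pairs and the 0.5 weighting on the two "fake" terms follow the matching-aware scheme of Reed et al.; the exact weighting and labels here are assumptions:

```python
import torch
import torch.nn.functional as F

def discriminator_step(D, G, images, eeg_feats, wrong_eeg_feats, noise_dim=100):
    """One discriminator update with three kinds of pairs (a sketch):
      1. real image + matching EEG condition    -> label "real"
      2. real image + wrong-class EEG condition -> label "fake"
      3. generated image + matching condition   -> label "fake"
    """
    batch = images.size(0)
    real_labels = torch.ones(batch, 1, device=images.device)
    fake_labels = torch.zeros(batch, 1, device=images.device)

    # Real image, correct condition: should be judged real
    loss_real = F.binary_cross_entropy(D(images, eeg_feats), real_labels)
    # Real image, wrong condition: explicit supervision for condition matching
    loss_wrong = F.binary_cross_entropy(D(images, wrong_eeg_feats), fake_labels)

    # Generated image, correct condition: detach so G is not updated here
    noise = torch.randn(batch, noise_dim, device=images.device)
    fake_images = G(noise, eeg_feats).detach()
    loss_fake = F.binary_cross_entropy(D(fake_images, eeg_feats), fake_labels)

    return loss_real + 0.5 * (loss_wrong + loss_fake)
```

The real-image/wrong-condition term is what gives the discriminator direct supervision for condition matching, rather than leaving it to infer that criterion implicitly.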