Attention
RNN

Attention

Sequence-to-Sequence Model: an input sequence is provided and an output sequence is derived from it. Encoder and Decoder: the encoder compresses the input into what we call a context vector, which is passed to the decoder and decoded into the output sequence. We could always use a bigger context, i.e., the outputs from all the hidden states, but then we run into performance issues and a higher chance of overfitting.
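The encoder-decoder idea above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the post's actual model: the vocabulary and layer sizes are made up, and the encoder's final hidden state plays the role of the context vector handed to the decoder.

```python
import torch
import torch.nn as nn

# Hypothetical sizes, chosen only for illustration
vocab_size, emb_dim, hidden_dim = 100, 32, 64

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src):
        _, h = self.gru(self.embed(src))
        return h  # final hidden state acts as the context vector

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt, context):
        # The context vector initializes the decoder's hidden state
        out, _ = self.gru(self.embed(tgt), context)
        return self.out(out)

src = torch.randint(0, vocab_size, (1, 7))  # input sequence of 7 tokens
tgt = torch.randint(0, vocab_size, (1, 5))  # output sequence of 5 tokens
context = Encoder()(src)
logits = Decoder()(tgt, context)
print(logits.shape)  # torch.Size([1, 5, 100])
```

Everything the decoder knows about the input is squeezed through that single context vector, which is exactly the bottleneck attention mechanisms were introduced to relieve.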

  • Pinaki Pani
DCGAN
GANs

DCGAN

Implementing a Deep Convolutional GAN to generate house numbers that look as realistic as possible. The DCGAN architecture was first explored in 2016 and has seen impressive results in generating new images; you can read the original paper here.

import matplotlib.pyplot as plt
import numpy as np
import pickle as pkl
import torch
from torchvision import datasets
from torchvision import transforms
import torch.
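The heart of a DCGAN is a generator built from transposed convolutions that upsample random noise into an image. The sketch below assumes 32x32 RGB house-number images and made-up layer sizes; it is not the post's implementation, only an illustration of the pattern.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Minimal DCGAN-style generator: noise vector -> 32x32x3 image."""

    def __init__(self, z_size=100, conv_dim=32):
        super().__init__()
        self.conv_dim = conv_dim
        # Project the noise into a small 4x4 feature map
        self.fc = nn.Linear(z_size, conv_dim * 4 * 4 * 4)
        # Each ConvTranspose2d doubles spatial size: 4 -> 8 -> 16 -> 32
        self.net = nn.Sequential(
            nn.ConvTranspose2d(conv_dim * 4, conv_dim * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(conv_dim * 2),
            nn.ReLU(),
            nn.ConvTranspose2d(conv_dim * 2, conv_dim, 4, stride=2, padding=1),
            nn.BatchNorm2d(conv_dim),
            nn.ReLU(),
            nn.ConvTranspose2d(conv_dim, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized training images
        )

    def forward(self, z):
        x = self.fc(z).view(-1, self.conv_dim * 4, 4, 4)
        return self.net(x)

z = torch.randn(16, 100)  # batch of 16 random noise vectors
fake = Generator()(z)
print(fake.shape)  # torch.Size([16, 3, 32, 32])
```

Batch normalization between the transposed convolutions and a Tanh output layer are the two architectural choices the DCGAN paper emphasizes for stable training.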

  • Pinaki Pani
Generative Adversarial Networks
Deep Learning

Generative Adversarial Networks

Catching up with RNNs, and the key differences: recall that RNNs generate one word at a time, and likewise one pixel at a time for images, whereas GANs generate a whole image in parallel. A GAN uses a generator-discriminator network model: the generator takes random noise and runs it through a differentiable function to transform it into a more realistic image.
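That generator-discriminator pairing can be shown with two tiny networks. This is a toy sketch with assumed sizes (784 would fit a flattened 28x28 image), not a trained model: the generator maps noise to a full image in one parallel forward pass, and the discriminator scores how real it looks.

```python
import torch
import torch.nn as nn

# Generator: a differentiable function from noise to a flattened image
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())
# Discriminator: scores each image with a single realness logit
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

z = torch.randn(8, 100)  # batch of random noise vectors
fake = G(z)              # 8 whole images produced in one parallel pass
score = D(fake)          # discriminator's realness logit per image
print(fake.shape, score.shape)  # torch.Size([8, 784]) torch.Size([8, 1])
```

During training the two are pitted against each other: the discriminator learns to tell real images from `fake`, while the generator learns to raise `score` on its own outputs.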

  • Pinaki Pani