Face Generation

In this project, you'll define and train a DCGAN on a dataset of faces. Your goal is to get a generator network to generate new images of faces that look as realistic as possible!

The project will be broken down into a series of tasks from loading in data to defining and training adversarial networks. At the end of the notebook, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise.

Get the Data

You'll be using the CelebFaces Attributes Dataset (CelebA) to train your adversarial networks.

This dataset is more complex than the number datasets (like MNIST or SVHN) you've been working with, and so, you should prepare to define deeper networks and train them for a longer time to get good results. It is suggested that you utilize a GPU for training.

Pre-processed Data

Since the project's main focus is on building the GANs, we've done some of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. Some sample data is show below.

If you are working locally, you can download this data by clicking here

This is a zip file that you'll need to extract in the home directory of this notebook for further loading and processing. After extracting the data, you should be left with a directory of data processed_celeba_small/

In [1]:
# can comment out after executing
#!unzip processed_celeba_small.zip
In [2]:
data_dir = 'processed_celeba_small/'

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
import problem_unittests as tests
#import helper

%matplotlib inline

Visualize the CelebA Data

The CelebA dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with 3 color channels (RGB)#RGB_Images) each.

Pre-process and Load the Data

Since the project's main focus is on building the GANs, we've done some of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. This pre-processed dataset is a smaller subset of the very large CelebA data.

There are a few other steps that you'll need to transform this data and create a DataLoader.

Exercise: Complete the following get_dataloader function, such that it satisfies these requirements:

  • Your images should be square, Tensor images of size image_size x image_size in the x and y dimension.
  • Your function should return a DataLoader that shuffles and batches these Tensor images.

ImageFolder

To create a dataset given a directory of images, it's recommended that you use PyTorch's ImageFolder wrapper, with a root directory processed_celeba_small/ and data transformation passed in.

In [3]:
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms
In [4]:
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
    """
    Batch the neural network data using DataLoader
    :param batch_size: The size of each batch; the number of images in a batch
    :param img_size: The square size of the image data (x, y)
    :param data_dir: Directory where image data is located
    :return: DataLoader with batched data
    """
    transform = transforms.Compose([
                    transforms.Resize(image_size),
                    transforms.ToTensor()
                ])
    my_dataset = datasets.ImageFolder(data_dir, 
                                      transform = transform)
    
    # TODO: Implement function and return a dataloader
    data_loader = torch.utils.data.DataLoader(dataset = my_dataset,
                                               batch_size = batch_size,
                                               shuffle = True)
    
    return data_loader

Create a DataLoader

Exercise: Create a DataLoader celeba_train_loader with appropriate hyperparameters.

Call the above function and create a dataloader to view images.

  • You can decide on any reasonable batch_size parameter
  • Your image_size must be 32. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!
In [5]:
# Define function hyperparameters
batch_size = 128
img_size = 32

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)

Next, you can view some images! You should seen square images of somewhat-centered faces.

Note: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image, suggested imshow code is below, but it may not be perfect.

In [6]:
# helper display function
def imshow(img):
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = dataiter.next() # _ for no labels

# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(20, 4))
plot_size=20
for idx in np.arange(plot_size):
    ax = fig.add_subplot(2, plot_size/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx])

Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1

You need to do a bit of pre-processing; you know that the output of a tanh activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)

In [7]:
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
    ''' Scale takes in an image x and returns that image, scaled
       with a feature_range of pixel values from -1 to 1. 
       This function assumes that the input x is already scaled from 0-1.'''
    # assume x is scaled to (0, 1)
    # scale to feature_range and return scaled x
    min, max = feature_range
    x = x * (max - min) + min
    
    return x
In [8]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)

print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
Min:  tensor(-0.8980)
Max:  tensor(0.8196)

Define the Model

A GAN is comprised of two adversarial networks, a discriminator and a generator.

Discriminator

Your first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with normalization. You are also allowed to create any helper functions that may be useful.

Exercise: Complete the Discriminator class

  • The inputs to the discriminator are 32x32x3 tensor images
  • The output should be a single value that will indicate whether a given image is real or fake
In [9]:
import torch.nn as nn
import torch.nn.functional as F
In [10]:
import torch.nn as nn
import torch.nn.functional as F

# helper conv function
def conv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
    """Creates a convolutional layer, with optional batch normalization.
    """
    layers = []
    conv_layer = nn.Conv2d(in_channels, out_channels, 
                           kernel_size, stride, padding, bias=False)
    
    # append conv layer
    layers.append(conv_layer)

    if batch_norm:
        # append batchnorm layer
        layers.append(nn.BatchNorm2d(out_channels))
     
    # using Sequential container
    return nn.Sequential(*layers)


class Discriminator(nn.Module):

    def __init__(self, conv_dim):
        """
        Initialize the Discriminator Module
        :param conv_dim: The depth of the first convolutional layer
        """
        super(Discriminator, self).__init__()

        # complete init function
        self.conv_dim = conv_dim
        # 32x32 input
        self.conv1 = conv(3, conv_dim, 4, batch_norm=False) # first layer, no batch_norm
        # 16x16 out
        self.conv2 = conv(conv_dim, conv_dim*2, 4)
        # 8x8 out
        self.conv3 = conv(conv_dim*2, conv_dim*4, 4)
        # 4x4 out
        # final, fully-connected layer
        self.fc = nn.Linear(conv_dim*4*4*4, 1)

    def forward(self, x):
        """
        Forward propagation of the neural network
        :param x: The input to the neural network     
        :return: Discriminator logits; the output of the neural network
        """
        # define feedforward behavior
        output = F.leaky_relu(self.conv1(x), 0.2)
        output = F.leaky_relu(self.conv2(output), 0.2)
        output = F.leaky_relu(self.conv3(output), 0.2)
        
        # flatten
        output = output.view(-1, self.conv_dim*4*4*4)
        
        # final output layer
        output = self.fc(output)        
        return output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(Discriminator)
Tests Passed

Generator

The generator should upsample an input and generate a new image of the same size as our training data 32x32x3. This should be mostly transpose convolutional layers with normalization applied to the outputs.

Exercise: Complete the Generator class

  • The inputs to the generator are vectors of some length z_size
  • The output should be a image of shape 32x32x3
In [11]:
# helper deconv function
def deconv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
    """Creates a transposed-convolutional layer, with optional batch normalization.
    """
    # create a sequence of transpose + optional batch norm layers
    layers = []
    transpose_conv_layer = nn.ConvTranspose2d(in_channels, out_channels, 
                                              kernel_size, stride, padding, bias=False)
    # append transpose convolutional layer
    layers.append(transpose_conv_layer)
    
    if batch_norm:
        # append batchnorm layer
        layers.append(nn.BatchNorm2d(out_channels))
        
    return nn.Sequential(*layers)


class Generator(nn.Module):
    
    def __init__(self, z_size, conv_dim):
        """
        Initialize the Generator Module
        :param z_size: The length of the input latent vector, z
        :param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
        """
        super(Generator, self).__init__()

        # complete init function
        self.conv_dim = conv_dim
        
        # first, fully-connected layer
        self.fc = nn.Linear(z_size, conv_dim*4*4*4)

        # transpose conv layers
        self.t_conv1 = deconv(conv_dim*4, conv_dim*2, 4)
        self.t_conv2 = deconv(conv_dim*2, conv_dim, 4)
        self.t_conv3 = deconv(conv_dim, 3, 4, batch_norm=False)


    def forward(self, x):
        """
        Forward propagation of the neural network
        :param x: The input to the neural network     
        :return: A 32x32x3 Tensor image as output
        """
        # define feedforward behavior
        # fully-connected + reshape 
        output = self.fc(x)
        output = output.view(-1, self.conv_dim*4, 4, 4) # (batch_size, depth, 4, 4)
        
        # hidden transpose conv layers + relu
        output = F.relu(self.t_conv1(output))
        output = F.relu(self.t_conv2(output))
        
        # last layer + tanh activation
        output = self.t_conv3(output)
        output = F.tanh(output)
        
        return output

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(Generator)
C:\Users\skysign\Anaconda3\lib\site-packages\torch\nn\functional.py:1320: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
  warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
Tests Passed

Initialize the weights of your networks

To help your models converge, you should initialize the weights of the convolutional and linear layers in your model. From reading the original DCGAN paper, they say:

All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.

So, your next task will be to define a weight initialization function that does just this!

You can refer back to the lesson on weight initialization or even consult existing model code, such as that from the networks.py file in CycleGAN Github repository to help you complete this function.

Exercise: Complete the weight initialization function

  • This should initialize only convolutional and linear layers
  • Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02.
  • The bias terms, if they exist, may be left alone or set to 0.
In [12]:
import torch.nn as nn

def weights_init_normal(m):
    """
    Applies initial weights to certain layers in a model .
    The weights are taken from a normal distribution 
    with mean = 0, std dev = 0.02.
    :param m: A module or layer in a network    
    """
    # classname will be something like:
    # `Conv`, `BatchNorm2d`, `Linear`, etc.
    classname = m.__class__.__name__
    
    # TODO: Apply initial weights to convolutional and linear layers
    if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
        nn.init.normal(m.weight.data, 0.0, 0.2)
        
    if hasattr(m, 'bias') and m.bias is not None:
        nn.init.constant(m.bias.data, 0.0)
    

Build complete network

Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.

In [13]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def build_network(d_conv_dim, g_conv_dim, z_size):
    # define discriminator and generator
    D = Discriminator(d_conv_dim)
    G = Generator(z_size=z_size, conv_dim=g_conv_dim)

    # initialize model weights
    D.apply(weights_init_normal)
    G.apply(weights_init_normal)

    print(D)
    print()
    print(G)
    
    return D, G

Exercise: Define model hyperparameters

In [14]:
# Define model hyperparams
d_conv_dim = 32
g_conv_dim = 32
z_size = 100

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
C:\Users\skysign\Anaconda3\lib\site-packages\ipykernel_launcher.py:16: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
  app.launch_new_instance()
C:\Users\skysign\Anaconda3\lib\site-packages\ipykernel_launcher.py:19: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
Discriminator(
  (conv1): Sequential(
    (0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
  )
  (conv2): Sequential(
    (0): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (conv3): Sequential(
    (0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (fc): Linear(in_features=2048, out_features=1, bias=True)
)

Generator(
  (fc): Linear(in_features=100, out_features=2048, bias=True)
  (t_conv1): Sequential(
    (0): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (t_conv2): Sequential(
    (0): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (t_conv3): Sequential(
    (0): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
  )
)

Training on GPU

Check if you can train on GPU. Here, we'll set this as a boolean variable train_on_gpu. Later, you'll be responsible for making sure that

  • Models,
  • Model inputs, and
  • Loss function arguments

Are moved to GPU, where appropriate.

In [15]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch

# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
    print('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Training on GPU!')
Training on GPU!

Discriminator and Generator Losses

Now we need to calculate the losses for both types of adversarial networks.

Discriminator Losses

  • For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_real_loss + d_fake_loss.
  • Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.

Generator Loss

The generator loss will look similar only with flipped labels. The generator's goal is to get the discriminator to think its generated images are real.

Exercise: Complete real and fake loss functions

You may choose to use either cross entropy or a least squares error loss to complete the following real_loss and fake_loss functions.

In [16]:
def real_loss(D_out):
    '''Calculates how close discriminator outputs are to being real.
       param, D_out: discriminator logits
       return: real loss'''
    batch_size = D_out.size(0)
    labels = torch.ones(batch_size)
    
    if train_on_gpu:
        labels = labels.cuda()

    criterion = nn.BCEWithLogitsLoss()
    loss = criterion(D_out.squeeze(), labels)

    return loss

def fake_loss(D_out):
    '''Calculates how close discriminator outputs are to being fake.
       param, D_out: discriminator logits
       return: fake loss'''
    batch_size = D_out.size(0)
    labels = torch.zeros(batch_size)
    
    if train_on_gpu:
        labels = labels.cuda()
        
    criterion = nn.BCEWithLogitsLoss()
    loss = criterion(D_out.squeeze(), labels)
    
    return loss

Optimizers

Exercise: Define optimizers for your Discriminator (D) and Generator (G)

Define optimizers for your models with appropriate hyperparameters.

In [17]:
import torch.optim as optim

# params
lr = 0.0002
beta1=0.5
beta2=0.999 # default value

# Create optimizers for the discriminator D and generator G
d_optimizer = optim.Adam(D.parameters(), lr, [beta1, beta2])
g_optimizer = optim.Adam(G.parameters(), lr, [beta1, beta2])

Training

Training will involve alternating between training the discriminator and the generator. You'll use your functions real_loss and fake_loss to help you calculate the discriminator losses.

  • You should train the discriminator by alternating on real and fake images
  • Then the generator, which tries to trick the discriminator and should have an opposing loss function

Saving Samples

You've been given some code to print out some loss statistics and save some generated "fake" samples.

Exercise: Complete the training function

Keep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.

In [20]:
def train(D, G, n_epochs, print_every=50):
    '''Trains adversarial networks for some number of epochs
       param, D: the discriminator network
       param, G: the generator network
       param, n_epochs: number of epochs to train for
       param, print_every: when to print and record the models' losses
       return: D and G losses'''
    
    # move models to GPU
    if train_on_gpu:
        D.cuda()
        G.cuda()

    # keep track of loss and generated, "fake" samples
    samples = []
    losses = []

    # Get some fixed data for sampling. These are images that are held
    # constant throughout training, and allow us to inspect the model's performance
    sample_size=16
    fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
    fixed_z = torch.from_numpy(fixed_z).float()
    # move z to GPU if available
    if train_on_gpu:
        fixed_z = fixed_z.cuda()

    # epoch training loop
    for epoch in range(n_epochs):

        # batch training loop
        for batch_i, (real_images, _) in enumerate(celeba_train_loader):

            batch_size = real_images.size(0)
            real_images = scale(real_images)

            # ===============================================
            #         YOUR CODE HERE: TRAIN THE NETWORKS
            # ===============================================
            
            # 1. Train the discriminator on real and fake images
            d_optimizer.zero_grad()
            
            # Compute the discriminator losses on real images 
            if train_on_gpu:
                real_images = real_images.cuda()
                
            D_real = D(real_images)
            d_real_loss = real_loss(D_real)

            # Generate fake images
            z = np.random.uniform(-1, 1, size=(batch_size, z_size))
            z = torch.from_numpy(z).float()

            # move x to GPU, if available
            if train_on_gpu:
                z = z.cuda()
                
            fake_images = G(z)
            
            D_fake = D(fake_images)
            d_fake_loss = fake_loss(D_fake)
            # add up loss and perform backprop
            d_loss = d_real_loss + d_fake_loss
            d_loss.backward(retain_graph = True)
            d_optimizer.step()            
            
            # 2. Train the generator with an adversarial loss
            g_optimizer.zero_grad()
            
            # Compute the discriminator losses on fake images 
            # using flipped labels!
            D_fake = D(fake_images)
            g_loss = real_loss(D_fake) # use real loss to flip labels
        
            # perform backprop
            g_loss.backward()
            g_optimizer.step()
                        
            # ===============================================
            #              END OF YOUR CODE
            # ===============================================

            # Print some loss stats
            if batch_i % print_every == 0:
                # append discriminator loss and generator loss
                losses.append((d_loss.item(), g_loss.item()))
                # print discriminator and generator loss
                print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
                        epoch+1, n_epochs, d_loss.item(), g_loss.item()))


        ## AFTER EACH EPOCH##    
        # this code assumes your generator is named G, feel free to change the name
        # generate and save sample, fake images
        G.eval() # for generating samples
        samples_z = G(fixed_z)
        samples.append(samples_z)
        G.train() # back to training mode

    # Save training generator samples
    with open('train_samples.pkl', 'wb') as f:
        pkl.dump(samples, f)
    
    # finally return losses
    return losses

Set your number of training epochs and train your GAN!

In [21]:
# set number of epochs 
n_epochs = 8


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# call training function
losses = train(D, G, n_epochs=n_epochs)
C:\Users\skysign\Anaconda3\lib\site-packages\torch\nn\functional.py:1320: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
  warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
Epoch [    1/    8] | d_loss: 3.2875 | g_loss: 1.3376
Epoch [    1/    8] | d_loss: 0.7715 | g_loss: 3.0740
Epoch [    1/    8] | d_loss: 0.3631 | g_loss: 3.7655
Epoch [    1/    8] | d_loss: 0.2972 | g_loss: 4.2046
Epoch [    1/    8] | d_loss: 0.1956 | g_loss: 4.4791
Epoch [    1/    8] | d_loss: 0.1856 | g_loss: 5.0066
Epoch [    1/    8] | d_loss: 0.2565 | g_loss: 4.6779
Epoch [    1/    8] | d_loss: 0.1646 | g_loss: 5.0501
Epoch [    1/    8] | d_loss: 0.1326 | g_loss: 5.0729
Epoch [    1/    8] | d_loss: 0.0425 | g_loss: 5.4347
Epoch [    1/    8] | d_loss: 0.0997 | g_loss: 5.2102
Epoch [    1/    8] | d_loss: 0.0619 | g_loss: 4.9860
Epoch [    1/    8] | d_loss: 0.1206 | g_loss: 5.2489
Epoch [    1/    8] | d_loss: 0.0759 | g_loss: 4.9238
Epoch [    1/    8] | d_loss: 0.0844 | g_loss: 5.4246
Epoch [    2/    8] | d_loss: 0.0915 | g_loss: 4.7305
Epoch [    2/    8] | d_loss: 0.0682 | g_loss: 5.2975
Epoch [    2/    8] | d_loss: 0.0326 | g_loss: 5.4568
Epoch [    2/    8] | d_loss: 0.0435 | g_loss: 5.6143
Epoch [    2/    8] | d_loss: 0.0780 | g_loss: 5.0540
Epoch [    2/    8] | d_loss: 0.0760 | g_loss: 4.6559
Epoch [    2/    8] | d_loss: 0.0846 | g_loss: 4.8904
Epoch [    2/    8] | d_loss: 0.0516 | g_loss: 5.7631
Epoch [    2/    8] | d_loss: 0.0790 | g_loss: 4.8742
Epoch [    2/    8] | d_loss: 0.1196 | g_loss: 4.5144
Epoch [    2/    8] | d_loss: 0.1208 | g_loss: 4.2226
Epoch [    2/    8] | d_loss: 0.1767 | g_loss: 4.2728
Epoch [    2/    8] | d_loss: 0.1301 | g_loss: 4.6089
Epoch [    2/    8] | d_loss: 0.1325 | g_loss: 4.6750
Epoch [    2/    8] | d_loss: 0.1422 | g_loss: 4.1352
Epoch [    3/    8] | d_loss: 0.1200 | g_loss: 4.2380
Epoch [    3/    8] | d_loss: 0.1556 | g_loss: 3.8902
Epoch [    3/    8] | d_loss: 0.1487 | g_loss: 4.1887
Epoch [    3/    8] | d_loss: 0.1220 | g_loss: 4.3654
Epoch [    3/    8] | d_loss: 0.1004 | g_loss: 5.1633
Epoch [    3/    8] | d_loss: 0.0110 | g_loss: 7.6736
Epoch [    3/    8] | d_loss: 0.0206 | g_loss: 6.8916
Epoch [    3/    8] | d_loss: 0.0611 | g_loss: 4.8264
Epoch [    3/    8] | d_loss: 0.2372 | g_loss: 3.0760
Epoch [    3/    8] | d_loss: 0.0669 | g_loss: 5.0533
Epoch [    3/    8] | d_loss: 0.2028 | g_loss: 3.9988
Epoch [    3/    8] | d_loss: 0.1208 | g_loss: 3.8431
Epoch [    3/    8] | d_loss: 0.2134 | g_loss: 3.5469
Epoch [    3/    8] | d_loss: 0.1485 | g_loss: 4.4288
Epoch [    3/    8] | d_loss: 0.1718 | g_loss: 3.5330
Epoch [    4/    8] | d_loss: 0.3577 | g_loss: 3.2054
Epoch [    4/    8] | d_loss: 0.3122 | g_loss: 2.6166
Epoch [    4/    8] | d_loss: 0.2278 | g_loss: 4.1190
Epoch [    4/    8] | d_loss: 0.1093 | g_loss: 4.1648
Epoch [    4/    8] | d_loss: 0.2462 | g_loss: 4.6163
Epoch [    4/    8] | d_loss: 0.2279 | g_loss: 3.1308
Epoch [    4/    8] | d_loss: 0.2227 | g_loss: 3.3676
Epoch [    4/    8] | d_loss: 0.2385 | g_loss: 3.4835
Epoch [    4/    8] | d_loss: 0.2245 | g_loss: 3.6040
Epoch [    4/    8] | d_loss: 0.3692 | g_loss: 2.6942
Epoch [    4/    8] | d_loss: 0.1581 | g_loss: 3.3896
Epoch [    4/    8] | d_loss: 0.2411 | g_loss: 3.4255
Epoch [    4/    8] | d_loss: 0.4835 | g_loss: 2.7226
Epoch [    4/    8] | d_loss: 0.6169 | g_loss: 2.1776
Epoch [    4/    8] | d_loss: 0.1536 | g_loss: 3.3916
Epoch [    5/    8] | d_loss: 0.3405 | g_loss: 3.4163
Epoch [    5/    8] | d_loss: 0.2639 | g_loss: 3.0556
Epoch [    5/    8] | d_loss: 0.3722 | g_loss: 2.6704
Epoch [    5/    8] | d_loss: 0.4478 | g_loss: 2.8219
Epoch [    5/    8] | d_loss: 0.5190 | g_loss: 2.3001
Epoch [    5/    8] | d_loss: 0.3126 | g_loss: 3.3380
Epoch [    5/    8] | d_loss: 0.2651 | g_loss: 4.6843
Epoch [    5/    8] | d_loss: 0.2931 | g_loss: 3.2488
Epoch [    5/    8] | d_loss: 0.4074 | g_loss: 2.4438
Epoch [    5/    8] | d_loss: 0.3637 | g_loss: 3.4293
Epoch [    5/    8] | d_loss: 0.4798 | g_loss: 3.8362
Epoch [    5/    8] | d_loss: 0.4563 | g_loss: 2.7693
Epoch [    5/    8] | d_loss: 0.3917 | g_loss: 3.3263
Epoch [    5/    8] | d_loss: 0.4556 | g_loss: 2.8292
Epoch [    5/    8] | d_loss: 0.3888 | g_loss: 2.8459
Epoch [    6/    8] | d_loss: 0.3811 | g_loss: 2.7419
Epoch [    6/    8] | d_loss: 0.6790 | g_loss: 2.6444
Epoch [    6/    8] | d_loss: 0.2918 | g_loss: 3.1802
Epoch [    6/    8] | d_loss: 0.5527 | g_loss: 2.1263
Epoch [    6/    8] | d_loss: 0.5435 | g_loss: 2.0893
Epoch [    6/    8] | d_loss: 0.5795 | g_loss: 2.5601
Epoch [    6/    8] | d_loss: 0.4128 | g_loss: 3.0908
Epoch [    6/    8] | d_loss: 0.6479 | g_loss: 2.3037
Epoch [    6/    8] | d_loss: 0.8894 | g_loss: 1.4030
Epoch [    6/    8] | d_loss: 0.6365 | g_loss: 2.2575
Epoch [    6/    8] | d_loss: 0.5033 | g_loss: 2.4867
Epoch [    6/    8] | d_loss: 0.6394 | g_loss: 2.0161
Epoch [    6/    8] | d_loss: 0.6131 | g_loss: 1.9521
Epoch [    6/    8] | d_loss: 0.7121 | g_loss: 1.8790
Epoch [    6/    8] | d_loss: 0.7589 | g_loss: 1.9087
Epoch [    7/    8] | d_loss: 0.9396 | g_loss: 1.5144
Epoch [    7/    8] | d_loss: 0.7633 | g_loss: 1.5132
Epoch [    7/    8] | d_loss: 0.6344 | g_loss: 2.0850
Epoch [    7/    8] | d_loss: 0.8521 | g_loss: 1.5228
Epoch [    7/    8] | d_loss: 0.6527 | g_loss: 1.8520
Epoch [    7/    8] | d_loss: 0.7556 | g_loss: 1.0697
Epoch [    7/    8] | d_loss: 0.7727 | g_loss: 1.4434
Epoch [    7/    8] | d_loss: 0.4713 | g_loss: 2.1962
Epoch [    7/    8] | d_loss: 0.7191 | g_loss: 1.9797
Epoch [    7/    8] | d_loss: 0.8735 | g_loss: 1.5521
Epoch [    7/    8] | d_loss: 0.7531 | g_loss: 1.7141
Epoch [    7/    8] | d_loss: 0.8218 | g_loss: 1.4110
Epoch [    7/    8] | d_loss: 0.7810 | g_loss: 1.7204
Epoch [    7/    8] | d_loss: 0.6807 | g_loss: 1.8236
Epoch [    7/    8] | d_loss: 0.8199 | g_loss: 1.8146
Epoch [    8/    8] | d_loss: 0.6693 | g_loss: 1.8831
Epoch [    8/    8] | d_loss: 0.8758 | g_loss: 2.0153
Epoch [    8/    8] | d_loss: 0.5644 | g_loss: 2.1353
Epoch [    8/    8] | d_loss: 0.9255 | g_loss: 1.5365
Epoch [    8/    8] | d_loss: 0.9667 | g_loss: 1.4722
Epoch [    8/    8] | d_loss: 0.9906 | g_loss: 1.3410
Epoch [    8/    8] | d_loss: 0.7909 | g_loss: 1.6456
Epoch [    8/    8] | d_loss: 0.8876 | g_loss: 1.6077
Epoch [    8/    8] | d_loss: 0.8326 | g_loss: 1.6010
Epoch [    8/    8] | d_loss: 0.8427 | g_loss: 1.8801
Epoch [    8/    8] | d_loss: 1.0323 | g_loss: 1.4147
Epoch [    8/    8] | d_loss: 0.8031 | g_loss: 1.7515
Epoch [    8/    8] | d_loss: 0.6609 | g_loss: 1.8978
Epoch [    8/    8] | d_loss: 0.8006 | g_loss: 1.9260
Epoch [    8/    8] | d_loss: 0.8022 | g_loss: 1.4995

Training loss

Plot the training losses for the generator and discriminator, recorded after each epoch.

In [22]:
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
Out[22]:
<matplotlib.legend.Legend at 0x18a064c1978>

Generator samples from training

View samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.

In [23]:
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
    fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch]):
        img = img.detach().cpu().numpy()
        img = np.transpose(img, (1, 2, 0))
        img = ((img + 1)*255 / (2)).astype(np.uint8)
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.imshow(img.reshape((32,32,3)))
In [24]:
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
    samples = pkl.load(f)
In [25]:
_ = view_samples(-1, samples)

Question: What do you notice about your generated samples and how might you improve this model?

When you answer this question, consider the following factors:

  • The dataset is biased; it is made of "celebrity" faces that are mostly white
  • Model size; larger models have the opportunity to learn more features in a data feature space
  • Optimization strategy; optimizers and number of epochs affect your final result

Answer:

The generated faces appear that one or two skin colors are mixed. So if we categorized dataset as per skin color, or sex. It seems that hair impact much on face, so we should consider it, also, when we categorize dataset of face.

Model size is good as output face picture size is small 32x32, unfortunately, most of face picture don’t have its chin, so I couldn’t see how chin impact on whole face.

Initially, I’ve started with 20 epoch, as I checked losses, it seems that it is good to do early stopping at around 7 epoch, to save training time.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as a HTML file under "File" -> "Download as". Include the "problem_unittests.py" files in your submission.