Deep Learning Notes (15): GAN-4

Author: Nino_Lau | Published 2019-08-02 09:36

    Image-to-Image Translation

    The following introduces pix2pix, a model that applies a conditional GAN (CGAN) to image-to-image translation.

    import os
    import numpy as np
    import math
    import itertools
    import time
    import datetime
    import sys
    
    import torchvision
    import torchvision.transforms as transforms
    
    from torch.utils.data import DataLoader
    from torchvision import datasets
    
    
    import torch.nn as nn
    import torch.nn.functional as F
    import torch
    

    This experiment uses the Facades dataset. Because of how the dataset is packaged, each image contains two halves, as in the figure below: the left half is the ground-truth photo and the right half is the building outline. We therefore need to rewrite the dataset class; the cell below reads the data. The goal is for the model to generate the building on the left from the outline on the right.
    (figure: a sample from the Facades dataset; left half: ground-truth photo, right half: outline)

    (Optional reading) The dataset code follows.

    import glob
    import random
    import os
    import numpy as np
    
    from torch.utils.data import Dataset
    from PIL import Image
    import torchvision.transforms as transforms
    
    
    class ImageDataset(Dataset):
        def __init__(self, root, transforms_=None, mode="train"):
            self.transform = transforms_
            # collect all image paths under root/<mode>
            self.files = sorted(glob.glob(os.path.join(root, mode) + "/*.*"))
    
        def __getitem__(self, index):
            # crop the image: the left half is the ground-truth image, the right half is its outline
            img = Image.open(self.files[index % len(self.files)])
            w, h = img.size
            img_B = img.crop((0, 0, w / 2, h))
            img_A = img.crop((w / 2, 0, w, h))
    
            if np.random.random() < 0.5:
                # randomly flip both halves horizontally with probability 0.5
                img_A = Image.fromarray(np.array(img_A)[:, ::-1, :], "RGB")
                img_B = Image.fromarray(np.array(img_B)[:, ::-1, :], "RGB")
    
            img_A = self.transform(img_A)
            img_B = self.transform(img_B)
    
            return {"A": img_A, "B": img_B}
    
        def __len__(self):
            return len(self.files)
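
    (Optional) A quick usage sketch of ImageDataset, assuming the Facades images live under ./data/facades as in the training cells further below; each sample should be a dict with two 3-channel tensors.

    # Usage sketch (assumes images exist under ./data/facades/train)
    _tf = transforms.Compose([transforms.Resize((64, 64), Image.BICUBIC), transforms.ToTensor()])
    _ds = ImageDataset("./data/facades", transforms_=_tf)
    _sample = _ds[0]
    print(len(_ds), _sample["A"].shape, _sample["B"].shape)  # dataset size, torch.Size([3, 64, 64]) for each half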
    

    The generator G is an encoder-decoder network modeled on the U-Net: the feature map of layer i is concatenated onto layer n-i, which is possible because the two layers have the same spatial size.
    The discriminator D in pix2pix is a PatchGAN: no matter how large the generated image is, it is judged as a grid of fixed-size patches, and D outputs one real/fake score per patch (see the shape check after the model code below).

    import torch.nn as nn
    import torch.nn.functional as F
    import torch
    
    
    ##############################
    #           U-NET
    ##############################
    
    
    class UNetDown(nn.Module):
        def __init__(self, in_size, out_size, normalize=True, dropout=0.0):
            super(UNetDown, self).__init__()
            layers = [nn.Conv2d(in_size, out_size, 4, 2, 1, bias=False)]
            if normalize:
                # when the batch size is 1, batch norm is replaced by instance normalization
                layers.append(nn.InstanceNorm2d(out_size))
            layers.append(nn.LeakyReLU(0.2))
            if dropout:
                layers.append(nn.Dropout(dropout))
            self.model = nn.Sequential(*layers)
    
        def forward(self, x):
            return self.model(x)
    
    
    class UNetUp(nn.Module):
        def __init__(self, in_size, out_size, dropout=0.0):
            super(UNetUp, self).__init__()
            layers = [
                nn.ConvTranspose2d(in_size, out_size, 4, 2, 1, bias=False),
                # when the batch size is 1, batch norm is replaced by instance normalization
                nn.InstanceNorm2d(out_size),
                nn.ReLU(inplace=True),
            ]
            if dropout:
                layers.append(nn.Dropout(dropout))
    
            self.model = nn.Sequential(*layers)
    
        def forward(self, x, skip_input):
            x = self.model(x)
            x = torch.cat((x, skip_input), 1)
    
            return x
    
    
    class GeneratorUNet(nn.Module):
        def __init__(self, in_channels=3, out_channels=3):
            super(GeneratorUNet, self).__init__()
    
            self.down1 = UNetDown(in_channels, 64, normalize=False)
            self.down2 = UNetDown(64, 128)
            self.down3 = UNetDown(128, 256)
            self.down4 = UNetDown(256, 256, dropout=0.5)
            self.down5 = UNetDown(256, 256, dropout=0.5)
            self.down6 = UNetDown(256, 256, normalize=False, dropout=0.5)
    
            self.up1 = UNetUp(256, 256, dropout=0.5)
            self.up2 = UNetUp(512, 256)
            self.up3 = UNetUp(512, 256)
            self.up4 = UNetUp(512, 128)
            self.up5 = UNetUp(256, 64)
    
            self.final = nn.Sequential(
                nn.Upsample(scale_factor=2),
                nn.ZeroPad2d((1, 0, 1, 0)),
                nn.Conv2d(128, out_channels, 4, padding=1),
                nn.Tanh(),
            )
    
        def forward(self, x):
            # U-Net generator with skip connections from encoder to decoder
            d1 = self.down1(x)# 32x32
            d2 = self.down2(d1)#16x16
            d3 = self.down3(d2)#8x8
            d4 = self.down4(d3)#4x4
            d5 = self.down5(d4)#2x2
            d6 = self.down6(d5)#1x1
            u1 = self.up1(d6, d5)#2x2
            u2 = self.up2(u1, d4)#4x4
            u3 = self.up3(u2, d3)#8x8
            u4 = self.up4(u3, d2)#16x16
            u5 = self.up5(u4, d1)#32x32
    
            return self.final(u5)#64x64
    
    
    ##############################
    #        Discriminator
    ##############################
    
    
    class Discriminator(nn.Module):
        def __init__(self, in_channels=3):
            super(Discriminator, self).__init__()
    
            def discriminator_block(in_filters, out_filters, normalization=True):
                """Returns downsampling layers of each discriminator block"""
                layers = [nn.Conv2d(in_filters, out_filters, 4, stride=2, padding=1)]
                if normalization:
                    # when the batch size is 1, batch norm is replaced by instance normalization
                    layers.append(nn.InstanceNorm2d(out_filters))
                layers.append(nn.LeakyReLU(0.2, inplace=True))
                return layers
    
            self.model = nn.Sequential(
                *discriminator_block(in_channels * 2, 64, normalization=False),#32x32
                *discriminator_block(64, 128),#16x16
                *discriminator_block(128, 256),#8x8
                *discriminator_block(256, 256),#4x4
                nn.ZeroPad2d((1, 0, 1, 0)),
                nn.Conv2d(256, 1, 4, padding=1, bias=False)#4x4
            )
    
        def forward(self, img_A, img_B):
            # Concatenate image and condition image by channels to produce input
            img_input = torch.cat((img_A, img_B), 1)
            return self.model(img_input)
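
    As a quick sanity check (a sketch assuming 64x64 RGB inputs, matching the img_size used later), the generator should return an image of the same spatial size and the discriminator a 4x4 patch map:

    # Shape check (assumes 64x64 RGB inputs, matching img_size used later)
    _G = GeneratorUNet()
    _D = Discriminator()
    _x = torch.randn(1, 3, 64, 64)   # outline image
    _y = _G(_x)                      # generated photo, same spatial size as the input
    print(_y.shape)                  # torch.Size([1, 3, 64, 64])
    print(_D(_y, _x).shape)          # torch.Size([1, 1, 4, 4]): one real/fake score per patch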
    

    (Optional reading) The function below displays the outline, the generated image, and the ground truth together for comparison.

    from utils import show
    def sample_images(dataloader, G, device):
        """Saves a generated sample from the validation set"""
        imgs = next(iter(dataloader))
        real_A = imgs["A"].to(device)
        real_B = imgs["B"].to(device)
        fake_B = G(real_A)
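        # concatenating along dim -2 (height) stacks outline, generated image, and ground truth vertically for each sample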
        img_sample = torch.cat((real_A.data, fake_B.data, real_B.data), -2)
        show(torchvision.utils.make_grid(img_sample.cpu().data, nrow=5, normalize=True))
    

    Next, define some hyperparameters, including lambda_pixel.

    # hyper param
    n_epochs = 200
    batch_size = 2
    lr = 0.0002
    img_size = 64
    channels = 3
    device = torch.device('cuda:0')
    betas = (0.5, 0.999)
    # Loss weight of L1 pixel-wise loss between translated image and real image
    lambda_pixel = 1
    

    The pix2pix loss function combines the CGAN loss with an L1 loss; the L1 term is scaled by a coefficient lambda that balances the two.
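
    For reference, the full pix2pix objective from the original paper (Isola et al., 2017), written in LaTeX notation, is

    G^* = \arg\min_G \max_D \; \mathcal{L}_{cGAN}(G, D) + \lambda \, \mathcal{L}_{L1}(G), \qquad \mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}\left[ \lVert y - G(x) \rVert_1 \right]

    The paper uses \lambda = 100, whereas this notebook sets lambda_pixel = 1, so the adversarial term carries relatively more weight here.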

    Here we define the loss functions and the optimizers. MSELoss is used as the GAN loss, i.e., the LSGAN formulation.
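
    The helpers show and weights_init_normal come from a local utils module that is not listed in this post. As a rough sketch of what weights_init_normal might look like (an assumption based on common pix2pix implementations, which draw conv weights from N(0, 0.02); the actual utils code may differ):

    # Hypothetical sketch of utils.weights_init_normal (the real utils module is not shown here)
    def weights_init_normal(m):
        classname = m.__class__.__name__
        if classname.find("Conv") != -1:
            torch.nn.init.normal_(m.weight.data, 0.0, 0.02)
        elif classname.find("BatchNorm2d") != -1:
            torch.nn.init.normal_(m.weight.data, 1.0, 0.02)
            torch.nn.init.constant_(m.bias.data, 0.0)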

    from utils import weights_init_normal
    # Loss functions
    criterion_GAN = torch.nn.MSELoss().to(device)
    criterion_pixelwise = torch.nn.L1Loss().to(device)
    
    # Calculate output of image discriminator (PatchGAN)
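    # with img_size = 64 and four stride-2 convolutions in D, the output map is 64 / 2**4 = 4, so patch = (1, 4, 4)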
    patch = (1, img_size // 16, img_size // 16)
    
    # Initialize generator and discriminator
    G = GeneratorUNet().to(device)
    D = Discriminator().to(device)
    G.apply(weights_init_normal)
    D.apply(weights_init_normal)
    
    optimizer_G = torch.optim.Adam(G.parameters(), lr=lr, betas=betas)
    optimizer_D = torch.optim.Adam(D.parameters(), lr=lr, betas=betas)
    
    # Configure dataloaders
    transforms_ = transforms.Compose([
        transforms.Resize((img_size, img_size), Image.BICUBIC),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ])
    
    
    dataloader = DataLoader(
        ImageDataset("./data/facades", transforms_=transforms_),
        batch_size=batch_size,
        shuffle=True,
        num_workers=8,
    )
    
    val_dataloader = DataLoader(
        ImageDataset("./data/facades", transforms_=transforms_, mode="val"),
        batch_size=10,
        shuffle=True,
        num_workers=1,
    )
    

    Now we train pix2pix. Each training iteration proceeds as follows:

    1. Train G first: for each outline image A, G generates fakeB (a building); compute the L1 loss between fakeB and realB (the ground truth), and feed (fakeB, A) into D to compute the MSE loss against a target label of 1. Update G with the sum of these two losses.
    2. Then train D: compute the MSE loss on (fakeB, A) and on (realB, A) (target labels 0 and 1 respectively), and update D.

    for epoch in range(n_epochs):
        for i, batch in enumerate(dataloader):
    
            #  G: A (outline) -> B (photo)
            real_A = batch["A"].to(device)
            real_B = batch["B"].to(device)
    
            # Adversarial ground truths
            real_label = torch.ones((real_A.size(0), *patch)).to(device)
            fake_label = torch.zeros((real_A.size(0), *patch)).to(device)
    
            # ------------------
            #  Train Generators
            # ------------------
    
            optimizer_G.zero_grad()
    
            # GAN loss
            fake_B = G(real_A)
            pred_fake = D(fake_B, real_A)
            loss_GAN = criterion_GAN(pred_fake, real_label)
            # Pixel-wise loss
            loss_pixel = criterion_pixelwise(fake_B, real_B)
    
            # Total loss
            loss_G = loss_GAN + lambda_pixel * loss_pixel
    
            loss_G.backward()
    
            optimizer_G.step()
    
            # ---------------------
            #  Train Discriminator
            # ---------------------
    
            optimizer_D.zero_grad()
    
            # Real loss
            pred_real = D(real_B, real_A)
            loss_real = criterion_GAN(pred_real, real_label)
    
            # Fake loss
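            # fake_B is detached so that this update does not backpropagate into the generator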
            pred_fake = D(fake_B.detach(), real_A)
            loss_fake = criterion_GAN(pred_fake, fake_label)
    
            # Total loss
            loss_D = 0.5 * (loss_real + loss_fake)
    
            loss_D.backward()
            optimizer_D.step()
    
        # Print log
        print(
            "\r[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f, pixel: %f, adv: %f]"
            % (
                epoch,
                n_epochs,
                i,
                len(dataloader),
                loss_D.item(),
                loss_G.item(),
                loss_pixel.item(),
                loss_GAN.item(),
            )
        )
    
        # If at sample interval save image
        if epoch == 0 or (epoch + 1) % 5 == 0:
            sample_images(val_dataloader, G, device)
    
    /opt/conda/lib/python3.6/site-packages/torch/nn/modules/upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
      warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
    
    
    [Epoch 0/200] [Batch 199/200] [D loss: 0.329559] [G loss: 0.837497, pixel: 0.370509, adv: 0.466988]
    
    image
    [Epoch 1/200] [Batch 199/200] [D loss: 0.187533] [G loss: 0.690237, pixel: 0.384734, adv: 0.305503]
    [Epoch 2/200] [Batch 199/200] [D loss: 0.192769] [G loss: 0.710474, pixel: 0.357925, adv: 0.352549]
    [Epoch 3/200] [Batch 199/200] [D loss: 0.257360] [G loss: 0.608871, pixel: 0.327612, adv: 0.281260]
    [Epoch 4/200] [Batch 199/200] [D loss: 0.147929] [G loss: 0.887955, pixel: 0.474433, adv: 0.413522]
    
    image
    [Epoch 5/200] [Batch 199/200] [D loss: 0.377922] [G loss: 0.743606, pixel: 0.492447, adv: 0.251159]
    [Epoch 6/200] [Batch 199/200] [D loss: 0.209727] [G loss: 0.689151, pixel: 0.384093, adv: 0.305057]
    [Epoch 7/200] [Batch 199/200] [D loss: 0.224705] [G loss: 1.000042, pixel: 0.639260, adv: 0.360782]
    [Epoch 8/200] [Batch 199/200] [D loss: 0.144029] [G loss: 1.020684, pixel: 0.503782, adv: 0.516902]
    [Epoch 9/200] [Batch 199/200] [D loss: 0.254280] [G loss: 0.809810, pixel: 0.416601, adv: 0.393209]
    
    image
    [Epoch 10/200] [Batch 199/200] [D loss: 0.243891] [G loss: 0.895190, pixel: 0.446443, adv: 0.448747]
    [Epoch 11/200] [Batch 199/200] [D loss: 0.210248] [G loss: 0.712496, pixel: 0.409450, adv: 0.303046]
    [Epoch 12/200] [Batch 199/200] [D loss: 0.178942] [G loss: 0.673143, pixel: 0.382591, adv: 0.290552]
    [Epoch 13/200] [Batch 199/200] [D loss: 0.116803] [G loss: 1.028422, pixel: 0.466901, adv: 0.561522]
    [Epoch 14/200] [Batch 199/200] [D loss: 0.236468] [G loss: 0.860611, pixel: 0.383249, adv: 0.477362]
    
    image
    [Epoch 15/200] [Batch 199/200] [D loss: 0.220974] [G loss: 0.686148, pixel: 0.446806, adv: 0.239342]
    [Epoch 16/200] [Batch 199/200] [D loss: 0.296042] [G loss: 1.235985, pixel: 0.507029, adv: 0.728956]
    [Epoch 17/200] [Batch 199/200] [D loss: 0.223143] [G loss: 0.806767, pixel: 0.373452, adv: 0.433314]
    [Epoch 18/200] [Batch 199/200] [D loss: 0.164129] [G loss: 1.060684, pixel: 0.519046, adv: 0.541638]
    [Epoch 19/200] [Batch 199/200] [D loss: 0.132792] [G loss: 1.019057, pixel: 0.431385, adv: 0.587671]
    
    image
    [Epoch 20/200] [Batch 199/200] [D loss: 0.210773] [G loss: 1.006550, pixel: 0.355910, adv: 0.650640]
    [Epoch 21/200] [Batch 199/200] [D loss: 0.197349] [G loss: 0.917636, pixel: 0.464914, adv: 0.452722]
    [Epoch 22/200] [Batch 199/200] [D loss: 0.315029] [G loss: 0.775995, pixel: 0.473480, adv: 0.302515]
    [Epoch 23/200] [Batch 199/200] [D loss: 0.215998] [G loss: 0.834821, pixel: 0.451176, adv: 0.383645]
    [Epoch 24/200] [Batch 199/200] [D loss: 0.139875] [G loss: 0.799897, pixel: 0.412393, adv: 0.387504]
    

    ......

    image
    [Epoch 190/200] [Batch 199/200] [D loss: 0.023207] [G loss: 1.274331, pixel: 0.346932, adv: 0.927399]
    [Epoch 191/200] [Batch 199/200] [D loss: 0.017008] [G loss: 1.253284, pixel: 0.341651, adv: 0.911634]
    [Epoch 192/200] [Batch 199/200] [D loss: 0.053124] [G loss: 0.952296, pixel: 0.424036, adv: 0.528260]
    [Epoch 193/200] [Batch 199/200] [D loss: 0.028502] [G loss: 1.004215, pixel: 0.253622, adv: 0.750593]
    [Epoch 194/200] [Batch 199/200] [D loss: 0.021864] [G loss: 1.111022, pixel: 0.212485, adv: 0.898536]
    
    image
    [Epoch 195/200] [Batch 199/200] [D loss: 0.029861] [G loss: 1.303421, pixel: 0.308407, adv: 0.995015]
    [Epoch 196/200] [Batch 199/200] [D loss: 0.014942] [G loss: 1.487336, pixel: 0.484178, adv: 1.003158]
    [Epoch 197/200] [Batch 199/200] [D loss: 0.037478] [G loss: 1.205319, pixel: 0.364015, adv: 0.841304]
    [Epoch 198/200] [Batch 199/200] [D loss: 0.046917] [G loss: 0.954051, pixel: 0.309143, adv: 0.644907]
    [Epoch 199/200] [Batch 199/200] [D loss: 0.026501] [G loss: 1.166066, pixel: 0.400264, adv: 0.765802]
    
    image

    Exercises

    1. Train pix2pix using only the L1 loss. Describe how the results differ.

    Answer: the generated images are blurrier, the edges are less sharp than before, the buildings show much less detail, and the colors are more uniform.

    for epoch in range(n_epochs):
        for i, batch in enumerate(dataloader):
    
            #  G: A (outline) -> B (photo)
            real_A = batch["A"].to(device)
            real_B = batch["B"].to(device)
    
            # ------------------
            #  Train Generators
            # ------------------
    
            optimizer_G.zero_grad()
    
            # GAN loss
            fake_B = G(real_A)
            # Pixel-wise loss
            loss_pixel = criterion_pixelwise(fake_B, real_B)
    
            # Total loss
            loss_G = loss_pixel
    
            loss_G.backward()
    
            optimizer_G.step()
    
    
        # Print log
        print(
            "\r[Epoch %d/%d] [Batch %d/%d] [G loss: %f]"
            % (
                epoch,
                n_epochs,
                i,
                len(dataloader),
                loss_G.item()
            )
        )
    
        # If at sample interval save image
        if epoch == 0 or (epoch + 1) % 5 == 0:
            sample_images(val_dataloader, G, device)
    
    /opt/conda/lib/python3.6/site-packages/torch/nn/modules/upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
      warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
    
    
    [Epoch 0/200] [Batch 199/200] [G loss: 0.277459]
    
    image
    [Epoch 1/200] [Batch 199/200] [G loss: 0.280747]
    [Epoch 2/200] [Batch 199/200] [G loss: 0.302678]
    [Epoch 3/200] [Batch 199/200] [G loss: 0.277463]
    [Epoch 4/200] [Batch 199/200] [G loss: 0.305907]
    
    image
    [Epoch 5/200] [Batch 199/200] [G loss: 0.361403]
    [Epoch 6/200] [Batch 199/200] [G loss: 0.262338]
    [Epoch 7/200] [Batch 199/200] [G loss: 0.242269]
    [Epoch 8/200] [Batch 199/200] [G loss: 0.269101]
    [Epoch 9/200] [Batch 199/200] [G loss: 0.282228]
    
    image
    [Epoch 10/200] [Batch 199/200] [G loss: 0.314959]
    [Epoch 11/200] [Batch 199/200] [G loss: 0.264300]
    [Epoch 12/200] [Batch 199/200] [G loss: 0.317328]
    [Epoch 13/200] [Batch 199/200] [G loss: 0.288205]
    [Epoch 14/200] [Batch 199/200] [G loss: 0.268344]
    
    image
    [Epoch 15/200] [Batch 199/200] [G loss: 0.270621]
    [Epoch 16/200] [Batch 199/200] [G loss: 0.260496]
    [Epoch 17/200] [Batch 199/200] [G loss: 0.295739]
    [Epoch 18/200] [Batch 199/200] [G loss: 0.172208]
    [Epoch 19/200] [Batch 199/200] [G loss: 0.208443]
    
    image
    [Epoch 20/200] [Batch 199/200] [G loss: 0.199149]
    [Epoch 21/200] [Batch 199/200] [G loss: 0.252810]
    [Epoch 22/200] [Batch 199/200] [G loss: 0.249091]
    [Epoch 23/200] [Batch 199/200] [G loss: 0.215632]
    [Epoch 24/200] [Batch 199/200] [G loss: 0.243048]
    

    ......

    image
    [Epoch 190/200] [Batch 199/200] [G loss: 0.116934]
    [Epoch 191/200] [Batch 199/200] [G loss: 0.102533]
    [Epoch 192/200] [Batch 199/200] [G loss: 0.099125]
    [Epoch 193/200] [Batch 199/200] [G loss: 0.092987]
    [Epoch 194/200] [Batch 199/200] [G loss: 0.102197]
    
    image
    [Epoch 195/200] [Batch 199/200] [G loss: 0.084951]
    [Epoch 196/200] [Batch 199/200] [G loss: 0.109301]
    [Epoch 197/200] [Batch 199/200] [G loss: 0.096025]
    [Epoch 198/200] [Batch 199/200] [G loss: 0.106507]
    [Epoch 199/200] [Batch 199/200] [G loss: 0.096771]
    
    image
    2. Train pix2pix using only the CGAN loss (fill in the corresponding code in the cell below and run it). Describe how the results differ.

    Answer: the generated images are noticeably sharper than with only the L1 loss, close to the sharpness achieved when both losses are used, but the colors are still not as rich or as realistic as when both are combined.

    for epoch in range(n_epochs):
        for i, batch in enumerate(dataloader):
            
            #  G: A (outline) -> B (photo)
            real_A = batch["A"].to(device)
            real_B = batch["B"].to(device)
    
            # Adversarial ground truths
            real_label = torch.ones((real_A.size(0), *patch)).to(device)
            fake_label = torch.zeros((real_A.size(0), *patch)).to(device)
    
            # ------------------
            #  Train Generators
            # ------------------
    
            optimizer_G.zero_grad()
    
            # GAN loss
            fake_B = G(real_A)
            pred_fake = D(fake_B, real_A)
            loss_GAN = criterion_GAN(pred_fake, real_label)
            
            # Total loss
            loss_G = loss_GAN
    
            loss_G.backward()
    
            optimizer_G.step()        
    
            # ---------------------
            #  Train Discriminator
            # ---------------------
    
            optimizer_D.zero_grad()
    
            # Real loss
            pred_real = D(real_B, real_A)
            loss_real = criterion_GAN(pred_real, real_label)
    
            # Fake loss
            pred_fake = D(fake_B.detach(), real_A)
            loss_fake = criterion_GAN(pred_fake, fake_label)
    
            # Total loss
            loss_D = 0.5 * (loss_real + loss_fake)
    
            loss_D.backward()
            optimizer_D.step()
            
        # Print log
        print(
            "\r[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]"
            % (
                epoch,
                n_epochs,
                i,
                len(dataloader),
                loss_D.item(),
                loss_G.item()
            )
        )
    
        # If at sample interval save image
        if epoch == 0 or (epoch + 1) % 5 == 0:
            sample_images(val_dataloader, G, device)
    
    [Epoch 0/200] [Batch 199/200] [D loss: 0.254110] [G loss: 0.400683]
    
    image
    [Epoch 1/200] [Batch 199/200] [D loss: 0.342961] [G loss: 0.359540]
    [Epoch 2/200] [Batch 199/200] [D loss: 0.385767] [G loss: 0.340707]
    [Epoch 3/200] [Batch 199/200] [D loss: 0.326501] [G loss: 0.188578]
    [Epoch 4/200] [Batch 199/200] [D loss: 0.102095] [G loss: 0.752533]
    
    image
    [Epoch 5/200] [Batch 199/200] [D loss: 0.122336] [G loss: 0.466870]
    [Epoch 6/200] [Batch 199/200] [D loss: 0.242705] [G loss: 0.309910]
    [Epoch 7/200] [Batch 199/200] [D loss: 0.305823] [G loss: 0.485629]
    [Epoch 8/200] [Batch 199/200] [D loss: 0.213961] [G loss: 0.397673]
    [Epoch 9/200] [Batch 199/200] [D loss: 0.344844] [G loss: 0.432747]
    
    image
    [Epoch 10/200] [Batch 199/200] [D loss: 0.184985] [G loss: 0.340543]
    [Epoch 11/200] [Batch 199/200] [D loss: 0.132156] [G loss: 0.509994]
    [Epoch 12/200] [Batch 199/200] [D loss: 0.197557] [G loss: 0.312057]
    [Epoch 13/200] [Batch 199/200] [D loss: 0.224186] [G loss: 0.181903]
    [Epoch 14/200] [Batch 199/200] [D loss: 0.099012] [G loss: 0.685086]
    
    image
    [Epoch 15/200] [Batch 199/200] [D loss: 0.225293] [G loss: 0.548334]
    [Epoch 16/200] [Batch 199/200] [D loss: 0.286089] [G loss: 0.723756]
    [Epoch 17/200] [Batch 199/200] [D loss: 0.291427] [G loss: 0.749057]
    [Epoch 18/200] [Batch 199/200] [D loss: 0.120387] [G loss: 0.559266]
    [Epoch 19/200] [Batch 199/200] [D loss: 0.078057] [G loss: 0.525624]
    
    image
    [Epoch 20/200] [Batch 199/200] [D loss: 0.241737] [G loss: 0.502774]
    [Epoch 21/200] [Batch 199/200] [D loss: 0.096059] [G loss: 0.497557]
    [Epoch 22/200] [Batch 199/200] [D loss: 0.309472] [G loss: 0.092431]
    [Epoch 23/200] [Batch 199/200] [D loss: 0.186846] [G loss: 0.214923]
    [Epoch 24/200] [Batch 199/200] [D loss: 0.173517] [G loss: 0.269690]
    

    ......

    image
    [Epoch 190/200] [Batch 199/200] [D loss: 0.009696] [G loss: 1.126480]
    [Epoch 191/200] [Batch 199/200] [D loss: 0.007405] [G loss: 1.073396]
    [Epoch 192/200] [Batch 199/200] [D loss: 0.039247] [G loss: 0.804929]
    [Epoch 193/200] [Batch 199/200] [D loss: 0.027823] [G loss: 1.100355]
    [Epoch 194/200] [Batch 199/200] [D loss: 0.020142] [G loss: 0.842804]
    
    image
    [Epoch 195/200] [Batch 199/200] [D loss: 0.008569] [G loss: 0.983230]
    [Epoch 196/200] [Batch 199/200] [D loss: 0.013945] [G loss: 0.900784]
    [Epoch 197/200] [Batch 199/200] [D loss: 0.021424] [G loss: 0.746807]
    [Epoch 198/200] [Batch 199/200] [D loss: 0.030270] [G loss: 1.077675]
    [Epoch 199/200] [Batch 199/200] [D loss: 0.013651] [G loss: 0.832313]
    
    image
