Intro to Convolutional Networks


Author: Kulbear | Published 2017-03-15 20:29

    Content Copyrights Udacity DLND Group

    Feature Map Sizes


    Convolutions

    ConvNet

    Convolution Output Shape

    Given an input layer with spatial size W (the formula applies separately to height and width), a filter of spatial size F, a stride of S, and padding of P, the following formula gives the spatial size of the next layer: (W − F + 2P)/S + 1. The output depth equals the number of filters.

    Setup

    H = height, W = width, D = depth

    • An input of shape 32x32x3 (HxWxD)
    • 20 filters of shape 8x8x3 (HxWxD)
    • A stride of 2 for both the height and width (S)
    • Valid padding of size 1 (P)
    new_height = (input_height - filter_height + 2 * P)/S + 1
    new_width = (input_width - filter_width + 2 * P)/S + 1
    

    What's the shape of the output? (Convolutional Layer Output Shape)

    The answer is 14x14x20.

    Plugging the numbers into the formula gives the new height and width:

    (32 - 8 + 2 * 1)/2 + 1 = 14
    (32 - 8 + 2 * 1)/2 + 1 = 14
    

    The new depth is equal to the number of filters, which is 20.
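The calculation above can be wrapped in a small helper for checking (a sketch; `conv_output_shape` is our own name, not a TensorFlow function):

```python
def conv_output_shape(in_h, in_w, filt_h, filt_w, n_filters, stride, pad):
    """Apply (W - F + 2P)/S + 1 to each spatial dimension."""
    out_h = (in_h - filt_h + 2 * pad) // stride + 1
    out_w = (in_w - filt_w + 2 * pad) // stride + 1
    return out_h, out_w, n_filters  # output depth = number of filters

print(conv_output_shape(32, 32, 8, 8, 20, stride=2, pad=1))  # (14, 14, 20)
```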

    This would correspond to the following code (note that 'VALID' padding in TensorFlow means no zero-padding, so tf.nn.conv2d here would actually produce a 13x13x20 output; the 14x14x20 above assumes P = 1):

    input = tf.placeholder(tf.float32, (None, 32, 32, 3))
    filter_weights = tf.Variable(tf.truncated_normal((8, 8, 3, 20))) # (height, width, input_depth, output_depth)
    filter_bias = tf.Variable(tf.zeros(20))
    strides = [1, 2, 2, 1] # (batch, height, width, depth)
    padding = 'VALID'
    conv = tf.nn.conv2d(input, filter_weights, strides, padding) + filter_bias
    

    Parameter Sharing

    Setup

    Being able to calculate the number of parameters in a neural network is useful since we want to have control over how much memory a neural network uses.

    • An input of shape 32x32x3 (HxWxD)
    • 20 filters of shape 8x8x3 (HxWxD)
    • A stride of 2 for both the height and width (S)
    • Valid padding of size 1 (P)
    • An output of shape 14x14x20 (HxWxD)

    Without parameter sharing, each neuron in the output layer gets its own copy of the filter's weights. In addition, each neuron in the output layer also connects to a single bias neuron.

    There are 756560 total parameters. That's a HUGE amount! Here's how we calculate it:

    (8 * 8 * 3 + 1) * (14 * 14 * 20) = 756560
    

    8 * 8 * 3 is the number of weights in one filter, plus 1 for the bias. Without sharing, each of the 14 * 14 * 20 output neurons has its own set of these parameters, so we multiply the two numbers together to get the final answer.

    With parameter sharing, each neuron in an output channel shares its weights with every other neuron in that channel. So the number of parameters is equal to the number of neurons in the filter, plus a bias neuron, all multiplied by the number of channels in the output layer.

    There are 3860 total parameters. That's 196 times fewer parameters! Here's how the answer is calculated:

    (8 * 8 * 3 + 1) * 20 = 3840 + 20 = 3860
    

    That's 3840 weights and 20 biases. This should look similar to the answer from the previous quiz. The difference is that it's just 20 instead of 14 * 14 * 20. Remember, with weight sharing we use the same filter for an entire depth slice. Because of this we can drop the 14 * 14 factor and be left with only 20.
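Both counts follow from the same expression and can be verified with a short sketch (the function `conv_params` is ours, for illustration only):

```python
def conv_params(filt_h, filt_w, in_depth, out_h, out_w, n_filters, shared):
    weights_per_neuron = filt_h * filt_w * in_depth + 1  # +1 for the bias
    if shared:
        # one set of parameters per output channel
        return weights_per_neuron * n_filters
    # one independent set of parameters per output neuron
    return weights_per_neuron * (out_h * out_w * n_filters)

print(conv_params(8, 8, 3, 14, 14, 20, shared=False))  # 756560
print(conv_params(8, 8, 3, 14, 14, 20, shared=True))   # 3860
```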

    Visualizing CNNs

    The CNN we will look at is trained on ImageNet as described in this paper by Zeiler and Fergus. In the images below (from the same paper), we'll see what each layer in this network detects and see how each layer detects more and more complex ideas.

    Layer 1

    Example patterns that cause activations in the first layer of the network. These range from simple diagonal lines (top left) to green blobs (bottom middle).

    Each image in the above grid represents a pattern that causes the neurons in the first layer to activate - in other words, they are patterns that the first layer recognizes. The top left image shows a -45 degree line, while the middle top square shows a +45 degree line.

    Let's now see some example images that cause such activations. The grid of images below all activated the -45 degree line detector. Notice how they are all selected despite having different colors, gradients, and patterns.

    Example patches that activate the -45 degree line detector in the first layer.

    Layer 2

    A visualization of the second layer in the CNN. Notice how we are picking up more complex ideas like circles and stripes. The gray grid on the left represents how this layer of the CNN activates (or "what it sees") based on the corresponding images from the grid on the right.

    The second layer of the CNN captures complex ideas.

    As you see in the image above, the second layer of the CNN recognizes circles (second row, second column), stripes (first row, second column), and rectangles (bottom right).

    The CNN learns to do this on its own. There is no special instruction for the CNN to focus on more complex objects in deeper layers. That's just how it normally works out when you feed training data into a CNN.

    Layer 3

    A visualization of the third layer in the CNN. The gray grid on the left represents how this layer of the CNN activates (or "what it sees") based on the corresponding images from the grid on the right.

    The third layer picks out complex combinations of features from the second layer. These include things like grids, and honeycombs (top left), wheels (second row, second column), and even faces (third row, third column).

    Layer 5

    A visualization of the fifth and final layer of the CNN. The gray grid on the left represents how this layer of the CNN activates (or "what it sees") based on the corresponding images from the grid on the right.

    We'll skip layer 4, which continues this progression, and jump right to the fifth and final layer of this CNN.

    The last layer picks out the highest order ideas that we care about for classification, like dog faces, bird faces, and bicycles.

    TensorFlow Convolution Layer

    Let's examine how to implement a CNN in TensorFlow.

    TensorFlow provides the tf.nn.conv2d() and tf.nn.bias_add() functions to create your own convolutional layers.

    # Output depth
    k_output = 64
    
    # Image Properties
    image_width = 10
    image_height = 10
    color_channels = 3
    
    # Convolution filter
    filter_size_width = 5
    filter_size_height = 5
    
    # Input/Image
    input = tf.placeholder(
        tf.float32,
        shape=[None, image_height, image_width, color_channels])
    
    # Weight and bias
    weight = tf.Variable(tf.truncated_normal(
        [filter_size_height, filter_size_width, color_channels, k_output]))
    bias = tf.Variable(tf.zeros(k_output))
    
    # Apply Convolution
    conv_layer = tf.nn.conv2d(input, weight, strides=[1, 2, 2, 1], padding='SAME')
    # Add bias
    conv_layer = tf.nn.bias_add(conv_layer, bias)
    # Apply activation function
    conv_layer = tf.nn.relu(conv_layer)
    

    The code above uses the tf.nn.conv2d() function to compute the convolution with weight as the filter and [1, 2, 2, 1] for the strides. TensorFlow uses a stride for each input dimension, [batch, input_height, input_width, input_channels]. We will almost always set the stride for batch and input_channels (the first and fourth elements of the strides array) to 1.

    You'll focus on changing the input_height and input_width strides while leaving the batch and input_channels strides at 1. The input_height and input_width strides control how the filter moves over the input. This example code uses a stride of 2 with a 5x5 filter over the input.

    The tf.nn.bias_add() function adds a 1-d bias to the last dimension in a matrix.
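The effect is plain broadcasting over the channel dimension; here is a minimal NumPy sketch (not TensorFlow code) of what the bias addition does:

```python
import numpy as np

conv_out = np.zeros((1, 10, 10, 64))   # (batch, height, width, channels)
bias = np.arange(64, dtype=float)      # one bias value per output channel

out = conv_out + bias                  # broadcasts over the last dimension
print(out.shape)        # (1, 10, 10, 64)
print(out[0, 3, 7, 5])  # 5.0 -- every spatial position in channel 5 got bias[5]
```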

    TensorFlow Max Pooling

    Max pooling with a 2x2 filter and stride of 2 (image via Wikimedia Commons).

    The image above is an example of max pooling with a 2x2 filter and stride of 2. The four colored 2x2 regions show each time the filter was applied to find the maximum value.

    For example, [[1, 0], [4, 6]] becomes 6, because 6 is the maximum value in this set. Similarly, [[2, 3], [6, 8]] becomes 8.

    Conceptually, the benefit of the max pooling operation is to reduce the size of the input, and allow the neural network to focus on only the most important elements. Max pooling does this by only retaining the maximum value for each filtered area, and removing the remaining values.
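The operation can be sketched in NumPy; the top two 2x2 patches below match the examples above, while the bottom rows are made-up values for illustration:

```python
import numpy as np

# 4x4 input; the top-left patch is [[1, 0], [4, 6]] and the top-right
# patch is [[2, 3], [6, 8]], matching the examples in the text.
x = np.array([[1, 0, 2, 3],
              [4, 6, 6, 8],
              [3, 1, 1, 0],
              [1, 2, 2, 4]])

# Max pool with a 2x2 filter and stride 2: split into 2x2 blocks, take each max.
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[6 8]
#  [3 4]]
```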

    TensorFlow provides the tf.nn.max_pool() function to apply max pooling to your convolutional layers.

    ...
    conv_layer = tf.nn.conv2d(input, weight, strides=[1, 2, 2, 1], padding='SAME')
    conv_layer = tf.nn.bias_add(conv_layer, bias)
    conv_layer = tf.nn.relu(conv_layer)
    # Apply Max Pooling
    conv_layer = tf.nn.max_pool(
        conv_layer,
        ksize=[1, 2, 2, 1],
        strides=[1, 2, 2, 1],
        padding='SAME')
    

    The tf.nn.max_pool() function performs max pooling with the ksize parameter as the size of the filter and the strides parameter as the length of the stride. 2x2 filters with a stride of 2x2 are common in practice.

    The ksize and strides parameters are structured as 4-element lists, with each element corresponding to a dimension of the input tensor ([batch, height, width, channels]). For both ksize and strides, the batch and channel dimensions are typically set to 1.

    A pooling layer is generally used to decrease the size of the output and prevent overfitting. Reducing overfitting is a consequence of reducing the output size, which in turn reduces the number of parameters in future layers.

    Recently, pooling layers have fallen out of favor. Some reasons are:

    • Recent datasets are so big and complex we're more concerned about underfitting.
    • Dropout is a much better regularizer.
    • Pooling results in a loss of information. Think about the max pooling operation as an example. We only keep the largest of n numbers, thereby disregarding n-1 numbers completely.

    Pooling Mechanics

    There are many pooling methods; the one we will dig into this time is max pooling.

    Setup

    H = height, W = width, D = depth

    • We have an input of shape 4x4x5 (HxWxD)
    • Filter of shape 2x2 (HxW)
    • A stride of 2 for both the height and width (S)
    new_height = (input_height - filter_height)/S + 1
    new_width = (input_width - filter_width)/S + 1
    

    For a pooling layer the output depth is the same as the input depth. Additionally, the pooling operation is applied individually for each depth slice.

    The image below gives an example of how a max pooling layer works. In this case, the max pooling filter has a shape of 2x2. As the max pooling filter slides across the input layer, the filter will output the maximum value of the 2x2 square.

    The answer is 2x2x5. Here's how it's calculated using the formula:

    (4 - 2)/2 + 1 = 2
    (4 - 2)/2 + 1 = 2
    

    The depth stays the same.

    Here's the corresponding code:

    input = tf.placeholder(tf.float32, (None, 4, 4, 5))
    filter_shape = [1, 2, 2, 1]
    strides = [1, 2, 2, 1]
    padding = 'VALID'
    pool = tf.nn.max_pool(input, filter_shape, strides, padding)
    

    The output shape of pool will be [batch_size, 2, 2, 5]. This holds even if padding is changed to 'SAME', since ceil(4 / 2) = 2.

    1 x 1 Convolutions

    1x1 Convolutions
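A 1x1 convolution leaves height and width unchanged and acts like a small fully connected layer applied at every pixel, mixing the input channels. A minimal NumPy sketch (the shapes here are our own example, not from the lesson):

```python
import numpy as np

x = np.random.rand(1, 14, 14, 20)  # (batch, height, width, channels)
w = np.random.rand(20, 8)          # 1x1 filter: in_channels x out_channels

# At each spatial position, multiply the 20-channel vector by w.
out = x @ w                        # matmul over the last axis
print(out.shape)  # (1, 14, 14, 8)
```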

    Convolutional Network in TensorFlow

    Dataset

    Here we're importing the MNIST dataset and using a convenient TensorFlow function to batch, scale, and One-Hot encode the data.

    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets(".", one_hot=True, reshape=False)
    
    import tensorflow as tf
    
    # Parameters
    learning_rate = 0.00001
    epochs = 10
    batch_size = 128
    
    # Number of samples to calculate validation and accuracy
    # Decrease this if you're running out of memory to calculate accuracy
    test_valid_size = 256
    
    # Network Parameters
    n_classes = 10  # MNIST total classes (0-9 digits)
    dropout = 0.75  # Dropout, probability to keep units
    

    Weights and Biases

    # Store layers weight & bias
    weights = {
        'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
        'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
        'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
        'out': tf.Variable(tf.random_normal([1024, n_classes]))}
    
    biases = {
        'bc1': tf.Variable(tf.random_normal([32])),
        'bc2': tf.Variable(tf.random_normal([64])),
        'bd1': tf.Variable(tf.random_normal([1024])),
        'out': tf.Variable(tf.random_normal([n_classes]))}
    

    Convolutions

    Convolution with 3×3 Filter. Source: http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution

    The above is an example of a convolution with a 3x3 filter and a stride of 1 being applied to data with a range of 0 to 1. The convolution for each 3x3 section is calculated against the weight, [[1, 0, 1], [0, 1, 0], [1, 0, 1]], then a bias is added to create the convolved feature on the right. In this case, the bias is zero. In TensorFlow, this is all done using tf.nn.conv2d() and tf.nn.bias_add().

    def conv2d(x, W, b, strides=1):
        x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
        x = tf.nn.bias_add(x, b)
        return tf.nn.relu(x)
    

    The tf.nn.conv2d() function computes the convolution against weight W as shown above.

    In TensorFlow, strides is an array of 4 elements; the first element indicates the stride for the batch dimension and the last the stride for features (channels). It's good practice to remove the batches or features you want to skip from the data set rather than use a stride to skip them. You can always set the first and last element of strides to 1 in order to use all batches and features.

    The middle two elements are the strides for height and width respectively. I've mentioned stride as one number because you usually have a square stride where height = width. When someone says they are using a stride of 3, they usually mean tf.nn.conv2d(x, W, strides=[1, 3, 3, 1]).

    To make life easier, the code is using tf.nn.bias_add() to add the bias. Using tf.add() doesn't work when the tensors aren't the same shape.

    Max Pooling

    Max Pooling with 2x2 filter and stride of 2. Source: http://cs231n.github.io/convolutional-networks/

    The above is an example of max pooling with a 2x2 filter and stride of 2. The left square is the input and the right square is the output. The four 2x2 colors in input represents each time the filter was applied to create the max on the right side. For example, [[1, 1], [5, 6]] becomes 6 and [[3, 2], [1, 2]] becomes 3.

    def maxpool2d(x, k=2):
        return tf.nn.max_pool(
            x,
            ksize=[1, k, k, 1],
            strides=[1, k, k, 1],
            padding='SAME')
    

    The tf.nn.max_pool() function does exactly what you would expect: it performs max pooling with the ksize parameter as the size of the filter.

    Model

    In the code below, we're creating two convolution-and-max-pooling layers followed by a fully connected layer and an output layer. The transformation of each layer to new dimensions is shown in the comments. For example, the first layer shapes the images from 28x28x1 to 28x28x32 in the convolution step, then max pooling turns each sample into 14x14x32. All the layers are applied from conv1 to output, producing 10 class predictions.

    def conv_net(x, weights, biases, dropout):
        # Layer 1 - 28*28*1 to 14*14*32
        conv1 = conv2d(x, weights['wc1'], biases['bc1'])
        conv1 = maxpool2d(conv1, k=2)
    
        # Layer 2 - 14*14*32 to 7*7*64
        conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
        conv2 = maxpool2d(conv2, k=2)
    
        # Fully connected layer - 7*7*64 to 1024
        fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
        fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
        fc1 = tf.nn.relu(fc1)
        fc1 = tf.nn.dropout(fc1, dropout)
    
        # Output Layer - class prediction - 1024 to 10
        out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
        return out
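The shape comments in conv_net can be double-checked by tracing the dimensions with the earlier formulas (the helper names here are ours, for illustration):

```python
def same_conv(h, w, d, n_filters):
    # 'SAME' padding with stride 1 keeps height/width; depth becomes n_filters
    return h, w, n_filters

def pool(h, w, d, k=2):
    # k x k max pooling with stride k divides height and width by k
    return h // k, w // k, d

shape = (28, 28, 1)                    # input image
shape = pool(*same_conv(*shape, 32))   # layer 1 -> (14, 14, 32)
shape = pool(*same_conv(*shape, 64))   # layer 2 -> (7, 7, 64)
flat = shape[0] * shape[1] * shape[2]  # 3136, matches weights['wd1'] input size
print(shape, flat)  # (7, 7, 64) 3136
```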
    

    Session

    # tf Graph input
    x = tf.placeholder(tf.float32, [None, 28, 28, 1])
    y = tf.placeholder(tf.float32, [None, n_classes])
    keep_prob = tf.placeholder(tf.float32)
    
    # Model
    logits = conv_net(x, weights, biases, keep_prob)
    
    # Define loss and optimizer
    cost = tf.reduce_mean(\
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\
        .minimize(cost)
    
    # Accuracy
    correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
    
    # Initializing the variables
    init = tf.global_variables_initializer()
    
    # Launch the graph
    with tf.Session() as sess:
        sess.run(init)
    
        for epoch in range(epochs):
            for batch in range(mnist.train.num_examples//batch_size):
                batch_x, batch_y = mnist.train.next_batch(batch_size)
                sess.run(optimizer, feed_dict={
                    x: batch_x,
                    y: batch_y,
                    keep_prob: dropout})
    
                # Calculate batch loss and accuracy
                loss = sess.run(cost, feed_dict={
                    x: batch_x,
                    y: batch_y,
                    keep_prob: 1.})
                valid_acc = sess.run(accuracy, feed_dict={
                    x: mnist.validation.images[:test_valid_size],
                    y: mnist.validation.labels[:test_valid_size],
                    keep_prob: 1.})
    
            print('Epoch {:>2}, Batch {:>3} - '
                  'Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(
                    epoch + 1,
                    batch + 1,
                    loss,
                    valid_acc))
    
        # Calculate Test Accuracy
        test_acc = sess.run(accuracy, feed_dict={
            x: mnist.test.images[:test_valid_size],
            y: mnist.test.labels[:test_valid_size],
            keep_prob: 1.})
        print('Testing Accuracy: {}'.format(test_acc))
    

    That's it! That is a CNN in TensorFlow. Now that you've seen a CNN in TensorFlow, let's see if you can apply it on your own!

    Original post: https://www.haomeiwen.com/subject/uqsrnttx.html