UFLDL New Tutorial and Programming Exercises (7): Convolution and Pooling

Author: 赖子啊 | Published 2019-08-10 21:49

UFLDL is an early introduction to deep learning written by Andrew Ng's team, and its rhythm of theory followed by hands-on exercises works very well: every time I want to finish the theory quickly and start coding the exercise, because the whole code framework is already set up for you, with detailed comments, so we only need to write a small amount of core code. It is easy to get started!

I could not find a Chinese translation of this part of the new tutorial, -_-, so I had better write it up now, before I lose the feel for it!
Section 7 is: Convolution and Pooling.

Convolution

The multilayer networks covered earlier were fully connected networks, whereas a convolutional neural network is a locally connected network. With CNNs as popular as they are now, a figure like the one below is probably what comes to mind when convolution is mentioned:
[Figure: illustration of the convolution operation]
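To make the locally connected point concrete with the dimensions used in this exercise (my own arithmetic, not from the tutorial): a fully connected layer mapping a 28 \times 28 image to 100 hidden units needs 28 \times 28 \times 100 = 78400 weights, whereas 100 locally connected 8 \times 8 filters need only 8 \times 8 \times 100 = 6400 weights, shared across every position of the image.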
Mathematically, the 2-D convolution of discrete variables is defined as:
C(j, k)=\sum_{p} \sum_{q} A(p, q) B(j-p+1, k-q+1)
We can use MATLAB's conv2 function to carry out the 2-D convolution conveniently (note that the filter W must first be flipped by 180°). Sliding a small a \times b filter x_{\text{small}} over a large r \times c image x_{\text{large}} yields a feature map of size (r-a+1) \times (c-b+1), as the short check below illustrates. My cnnConvolve.m then follows; it also contains a GPU version, which I have commented out:
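As a quick sanity check of the 180° flip and the output size (my own sketch, not part of the exercise starter code; the variable names here are made up for illustration), conv2 can be compared against the explicit sliding dot product:

im = rand(28, 28);                        % an r x c "large" image
W  = rand(8, 8);                          % an a x b filter
feat = conv2(im, rot90(W, 2), 'valid');   % flip the filter 180 degrees so conv2 computes the sliding dot product
disp(size(feat));                         % prints [21 21], i.e. (28-8+1) x (28-8+1)
% one entry should match the dot product over the corresponding 8x8 patch
patch = im(1:8, 1:8);
abs(feat(1,1) - sum(sum(patch .* W)))     % ~0 up to floating-point error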

function convolvedFeatures = cnnConvolve(filterDim, numFilters, images, W, b)
% convolvedFeatures = cnnConvolve(filterDim, numFilters, convImages,   W,     b);
% in cnnExercise.m                   8          100       28*28*8   8*8*100  100*1
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
%  filterDim - filter (feature) dimension
%  numFilters - number of feature maps
%  images - large images to convolve with, matrix in the form
%           images(r, c, image number)
%  W, b - W, b for features from the sparse autoencoder
%         W is of shape (filterDim,filterDim,numFilters)
%         b is of shape (numFilters,1)
%
% Returns:
%  convolvedFeatures - matrix of convolved features in the form
%                      convolvedFeatures(imageRow, imageCol, featureNum, imageNum)

numImages = size(images, 3);
imageDim = size(images, 1);  % images are square (imageDim x imageDim)
convDim = imageDim - filterDim + 1; % 28 - 8 + 1 = 21

convolvedFeatures = zeros(convDim, convDim, numFilters, numImages);

% Instructions:
%   Convolve every filter with every image here to produce the 
%   (imageDim - filterDim + 1) x (imageDim - filterDim + 1) x numFeatures x numImages
%   matrix convolvedFeatures, such that 
%   convolvedFeatures(imageRow, imageCol, featureNum, imageNum) is the
%   value of the convolved featureNum feature for the imageNum image over
%   the region (imageRow, imageCol) to (imageRow + filterDim - 1, imageCol + filterDim - 1)
%
% Expected running times: 
%   Convolving with 100 images should take less than 30 seconds 
%   Convolving with 5000 images should take around 2 minutes
%   (So to save time when testing, you should convolve with less images, as
%   described earlier)


for imageNum = 1:numImages
  for filterNum = 1:numFilters

    % convolution of image with feature matrix
    convolvedImage = zeros(convDim, convDim);

    % Obtain the feature (filterDim x filterDim) needed during the convolution

    %%% YOUR CODE HERE %%%
    filter = squeeze(W(:,:,filterNum));
    % Flip the feature matrix because of the definition of convolution, as explained later
    filter = rot90(squeeze(filter),2);  % rotate 180 degrees; squeeze removes singleton dimensions (a 2-D array is unaffected by squeeze)
      
    % Obtain the image
    im = squeeze(images(:, :, imageNum));

    % Convolve "filter" with "im", adding the result to convolvedImage
    % be sure to do a 'valid' convolution

    %%% YOUR CODE HERE %%%
    convolvedImage = conv2(im,filter,'valid'); % 21*21
    % Add the bias unit
    % Then, apply the sigmoid function to get the hidden activation
    
    %%% YOUR CODE HERE %%%
    convolvedImage = convolvedImage + b(filterNum);
    convolvedImage = sigmoid(convolvedImage);
    
    convolvedFeatures(:, :, filterNum, imageNum) = convolvedImage;
  end
end

%%%%%%%%%%%%%%%%%%% GPU version (commented out by default) %%%%%%%%%%%%%
% for imageNum = 1:numImages
%   for filterNum = 1:numFilters
% 
%     % convolution of image with feature matrix
%     convolvedImage = zeros(convDim, convDim);
%     gpu_convolvedImage = gpuArray(convolvedImage);
% 
%     % Obtain the feature (filterDim x filterDim) needed during the convolution
% 
%     %%% YOUR CODE HERE %%%
%     filter = squeeze(W(:,:,filterNum));
%     % Flip the feature matrix because of the definition of convolution, as explained later
%     filter = rot90(squeeze(filter),2);  % rotate 180 degrees; squeeze removes singleton dimensions (a 2-D array is unaffected by squeeze)
%       
%     % Obtain the image
%     im = squeeze(images(:, :, imageNum));
% 
%     % Convolve "filter" with "im", adding the result to convolvedImage
%     % be sure to do a 'valid' convolution
% 
%     %%% YOUR CODE HERE %%%
%     gpu_filter = gpuArray(filter);
%     gpu_im = gpuArray(im);
%     gpu_convolvedImage = conv2(gpu_im,gpu_filter,'valid');
%     % Add the bias unit
%     % Then, apply the sigmoid function to get the hidden activation
%     
%     %%% YOUR CODE HERE %%%
%     convolvedImage = gpu_convolvedImage + b(filterNum);
%     convolvedImage = sigmoid(convolvedImage);
%     
%     convolvedFeatures(:, :, filterNum, imageNum) = gather(convolvedImage);
%   end
% end

end
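One note on the code above: cnnConvolve calls sigmoid, which is not defined in this file; it is assumed to be a small helper already on the MATLAB path. A minimal version of such a helper would be:

function s = sigmoid(x)
% Elementwise logistic sigmoid
    s = 1 ./ (1 + exp(-x));
end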

Pooling

The animated figure below gives a good illustration of the pooling operation:

[Figure: animation of the pooling operation]
Pooling reduces the feature dimensionality and therefore the amount of computation. Below is my cnnPool.m; pooling can be implemented with either the mean function or the conv2 function, and I have commented out one of the two methods:

function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
%                                   3        21*21*100*8
%cnnPool Pools the given convolved features
%
% Parameters:
%  poolDim - dimension of pooling region
%  convolvedFeatures - convolved features to pool (as given by cnnConvolve)
%                      convolvedFeatures(imageRow, imageCol, featureNum, imageNum)
%
% Returns:
%  pooledFeatures - matrix of pooled features in the form
%                   pooledFeatures(poolRow, poolCol, featureNum, imageNum)
%     

numImages = size(convolvedFeatures, 4);
numFilters = size(convolvedFeatures, 3);
convolvedDim = size(convolvedFeatures, 1);

pooledFeatures = zeros(convolvedDim / poolDim, ...
        convolvedDim / poolDim, numFilters, numImages); % 7*7*100*8

% Instructions:
%   Now pool the convolved features in regions of poolDim x poolDim,
%   to obtain the 
%   (convolvedDim/poolDim) x (convolvedDim/poolDim) x numFeatures x numImages 
%   matrix pooledFeatures, such that
%   pooledFeatures(poolRow, poolCol, featureNum, imageNum) is the 
%   value of the featureNum feature for the imageNum image pooled over the
%   corresponding (poolRow, poolCol) pooling region. 
%   
%   Use mean pooling here.

%%% YOUR CODE HERE %%%
%% METHOD1:Using mean to pool
% for imageNum = 1:numImages
%   for filterNum = 1:numFilters
%       pooledImage = zeros(convolvedDim / poolDim, convolvedDim / poolDim);
%       im = convolvedFeatures(:,:,filterNum, imageNum);
%       for i=1:(convolvedDim / poolDim)
%           for j=1:(convolvedDim / poolDim)
%               pooledImage(i,j) = mean(mean(im((i-1)*poolDim+1:i*poolDim,(j-1)*poolDim+1:j*poolDim)));
%           end
%       end
%       
%       pooledFeatures(:,:,filterNum, imageNum) = pooledImage;
%   end
% end
%%======================================================================
%% METHOD2:Using conv2 as well to pool
% (if numImages is large, this method may be better; conv2 on gpuArray inputs can be used to speed it up)
pool_filter = 1/(poolDim*poolDim) * ones(poolDim,poolDim);
for imageNum = 1:numImages
  for filterNum = 1:numFilters
      pooledImage = zeros(convolvedDim / poolDim, convolvedDim / poolDim);
      im = convolvedFeatures(:,:,filterNum, imageNum);
      % convolve once per feature map (the result does not depend on i,j);
      % temp(u,v) is the mean of the poolDim x poolDim window whose top-left
      % corner is (u,v), so sampling it at the block corners gives
      % non-overlapping mean pooling
      temp = conv2(im,pool_filter,'valid');
      for i=1:(convolvedDim / poolDim)
          for j=1:(convolvedDim / poolDim)
              pooledImage(i,j) = temp(poolDim*(i-1)+1,poolDim*(j-1)+1);
          end
      end
      
      pooledFeatures(:,:,filterNum, imageNum) = pooledImage;
  end
end
end
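For reference, the body of the imageNum/filterNum loops can also be written without the inner i/j loops at all, using a reshape-based block average (my own sketch, not part of the exercise; it assumes convolvedDim is an exact multiple of poolDim, which holds here since 21 / 3 = 7):

im = convolvedFeatures(:, :, filterNum, imageNum);        % one 21x21 feature map
outDim = convolvedDim / poolDim;                          % 21 / 3 = 7
blocks = reshape(im, poolDim, outDim, poolDim, outDim);   % split rows and columns into poolDim-sized blocks
pooledFeatures(:, :, filterNum, imageNum) = squeeze(mean(mean(blocks, 1), 3));  % 7x7 block means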

Result (this exercise is on the easy side; it only tests these two operations and lays the groundwork for the convolutional neural network exercise that follows):

[Figure: convolution and pooling results]
If I have misunderstood anything, please point it out; if you have better ideas, feel free to discuss them in the comments below!
