
Training Handwritten Digit Recognition with an RNN (Recurrent Neural Network)

Author: Jiao123 | Published: 2016-11-11 11:40

    Introduction


    RNN (recurrent neural network), a recurrent (recursive) neural network, is used mainly for processing sequential data. A traditional neural network is fully connected from the input layer through the hidden layer to the output layer, while the neurons within a layer are not connected to one another, so it handles data that is inherently sequential (for example text, where successive words are related) rather poorly. An RNN ties each output to the outputs that came before it, which lets it handle sequential data well.
    A plain RNN still has some problems of its own: as the recurrence unrolls, the weight gradients can explode or vanish exponentially, making long-range temporal dependencies hard to capture. These problems can be handled well by combining the RNN with LSTM variants.
    This article mainly presents a simple RNN implemented in Objective-C and checks the model by training it on MNIST data. An LSTM implementation will be covered later when time allows.

    Formulas


    A simple RNN has just three layers: input, hidden layer, and output.

    Unrolled over time, the model applies the same hidden layer once per step of the sequence.

    At each step, the operation of the hidden layer A is to combine the current input with the previous output and then apply an activation function to obtain the current state signal.

    The update formulas are given below.

    Here X_t is the input data sequence, S_t is the state sequence, and the output O_t is V*S_t; the softmax operation applied to the output is not part of the recurrence itself.
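
    Written out explicitly, the standard simple-RNN update, which is exactly what the forward pass in the code below computes (U, W and V are the input, recurrent and output weight matrices, b and c the corresponding biases; these symbol names are mine):

        S_t = tanh(U * X_t + W * S_(t-1) + b)
        O_t = V * S_t + c        (softmax is then applied to O_t)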

    Because the RNN's structure is simple, the back-propagation formulas can be derived with a bit of calculus; they are not listed here, see the code implementation for the details.
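
    For reference, here is a sketch of the per-step gradients that the backPropagation: method below accumulates (standard back-propagation through time, written in my own notation: δ_t is the loss gradient with respect to the output O_t, e_t the error flowing back through the tanh, and ⊙ an element-wise product):

        dV = δ_t * S_t^T,    dc = δ_t
        e_t = (V^T * δ_t + W^T * e_(t+1)) ⊙ (1 - S_t^2)
        dW = e_t * S_(t-1)^T,    dU = e_t * X_t^T,    db = e_t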

    Data Processing


    Since I did not find a better training set, the MNIST data source from the earlier article 《OC实现Softmax识别手写数字》 is used here; the input-data preprocessing and the softmax implementation are reused as well.
    Image data is not inherently sequential, so here each row of pixels is treated as one input signal: an image with N rows becomes a sequence of length N. The training images are 28*28, so each signal is a 28*1 vector and there are 28 time steps in total.
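
    In terms of the flattened image buffer handed to the network below, this simply means that time step i reads pixel row i of the image (an illustrative fragment, assuming input points to the 784 flattened pixel values, mirroring what forwardPropagation: does internally):

        // time step i consumes pixel row i of the 28*28 image
        const double *x_i = input + i * 28;   // 28 values, i = 0 ... 27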

    RNN Implementation


    The implementation of a simple RNN is not complicated; there are just five sets of parameters to train: the input weights, the hidden-to-hidden (recurrent) weights, the output weights, and the two biases for the recurrent and output transforms. Straight to the code:

    //
    //  MLRnn.m
    //  LSTM
    //
    //  Created by Jiao Liu on 11/9/16.
    //  Copyright © 2016 ChangHong. All rights reserved.
    //
    
    #import "MLRnn.h"
    
    @implementation MLRnn
    
    #pragma mark - Inner Method
    
    // Samples from a normal distribution N(mean, stddev^2) using the Marsaglia polar
    // method, rejecting samples whose magnitude exceeds two standard deviations.
    + (double)truncated_normal:(double)mean dev:(double)stddev
    {
        double outP = 0.0;
        do {
            static int hasSpare = 0;
            static double spare;
            if (hasSpare) {
                hasSpare = 0;
                outP = mean + stddev * spare;
                continue;
            }
            
            hasSpare = 1;
            static double u,v,s;
            do {
                u = (rand() / ((double) RAND_MAX)) * 2.0 - 1.0;
                v = (rand() / ((double) RAND_MAX)) * 2.0 - 1.0;
                s = u * u + v * v;
            } while ((s >= 1.0) || (s == 0.0));
            s = sqrt(-2.0 * log(s) / s);
            spare = v * s;
            outP = mean + stddev * u * s;
        } while (fabsl(outP) > 2*stddev);
        return outP;
    }
    
    + (double *)fillVector:(double)num size:(int)size
    {
        double *outP = malloc(sizeof(double) * size);
        vDSP_vfillD(&num, outP, 1, size);
        return outP;
    }
    
    + (double *)weight_init:(int)size
    {
        double *outP = malloc(sizeof(double) * size);
        for (int i = 0; i < size; i++) {
            outP[i] = [MLRnn truncated_normal:0 dev:0.1];
        }
        return outP;
    }
    
    + (double *)bias_init:(int)size
    {
        return [MLRnn fillVector:0.1f size:size];
    }
    
    // Element-wise tanh; inputs with |x| > 20 are clamped to ±1 to avoid overflow in exp().
    + (double *)tanh:(double *)input size:(int)size
    {
        for (int i = 0; i < size; i++) {
            double num = input[i];
            if (num > 20) {
                input[i] = 1;
            }
            else if (num < -20)
            {
                input[i] = -1;
            }
            else
            {
                input[i] = (exp(num) - exp(-num)) / (exp(num) + exp(-num));
            }
        }
        return input;
    }
    
    #pragma mark - Init
    
    - (id)initWithNodeNum:(int)num layerSize:(int)size dataDim:(int)dim
    {
        self = [super init];
        if (self) {
            _nodeNum = num;
            _layerSize = size;
            _dataDim = dim;
            [self setupNet];
        }
        return self;
    }
    
    - (id)init
    {
        self = [super init];
        if (self) {
            [self setupNet];
        }
        return self;
    }
    
    - (void)setupNet
    {
        _inWeight = [MLRnn weight_init:_nodeNum * _dataDim];      // input -> hidden weights (nodeNum x dataDim)
        _outWeight = [MLRnn weight_init:_nodeNum * _dataDim];     // hidden -> output weights (dataDim x nodeNum)
        _flowWeight = [MLRnn weight_init:_nodeNum * _nodeNum];    // hidden -> hidden (recurrent) weights
        _outBias = calloc(_dataDim, sizeof(double));              // output bias
        _flowBias = calloc(_nodeNum, sizeof(double));             // recurrent bias
        _output = calloc(_layerSize * _dataDim, sizeof(double));  // one output vector per time step
        _state = calloc(_layerSize * _nodeNum, sizeof(double));   // one hidden state per time step
    }
    
    #pragma mark - Main Method
    
    // Forward pass: for each time step i,
    //   state_i  = tanh(inWeight * x_i + flowWeight * state_(i-1) + flowBias)
    //   output_i = outWeight * state_i + outBias
    // The softmax on the outputs is applied outside this class.
    - (double *)forwardPropagation:(double *)input
    {
        _input = input;
        // clean data
        double zero = 0;
        vDSP_vfillD(&zero, _output, 1, _layerSize * _dataDim);
        vDSP_vfillD(&zero, _state, 1, _layerSize * _nodeNum);
        
        for (int i = 0; i < _layerSize; i++) {
            double *temp1 = calloc(_nodeNum, sizeof(double));
            double *temp2 = calloc(_nodeNum, sizeof(double));
            if (i == 0) {
                vDSP_mmulD(_inWeight, 1, (input + i * _dataDim), 1, temp1, 1, _nodeNum, 1, _dataDim);
                vDSP_vaddD(temp1, 1,_flowBias, 1, temp1, 1, _nodeNum);
            }
            else
            {
                vDSP_mmulD(_inWeight, 1, (input + i * _dataDim), 1, temp1, 1, _nodeNum, 1, _dataDim);
                vDSP_mmulD(_flowWeight, 1, (_state + (i-1) * _nodeNum), 1, temp2, 1, _nodeNum, 1, _nodeNum);
                vDSP_vaddD(temp1, 1, temp2, 1, temp1, 1, _nodeNum);
                vDSP_vaddD(temp1, 1,_flowBias, 1, temp1, 1, _nodeNum);
            }
            [MLRnn tanh:temp1 size:_nodeNum];
            vDSP_vaddD((_state + i * _nodeNum), 1, temp1, 1, (_state + i * _nodeNum), 1, _nodeNum);
            vDSP_mmulD(_outWeight, 1, temp1, 1, (_output + i * _dataDim), 1, _dataDim, 1, _nodeNum);
            vDSP_vaddD((_output + i * _dataDim), 1, _outBias, 1,  (_output + i * _dataDim), 1, _dataDim);
            
            free(temp1);
            free(temp2);
        }
        
        return _output;
    }
    
    // Back-propagation through time: `loss` holds the gradient with respect to each
    // time step's output. Whatever is passed in is added directly to the parameters,
    // so the caller is expected to pass the already scaled (negative) gradient.
    // The error flowing back through the recurrent connection is carried in flowLoss.
    - (void)backPropagation:(double *)loss
    {
        double *flowLoss = calloc(_nodeNum, sizeof(double));
        for (int i = _layerSize - 1; i >= 0 ; i--) {
            vDSP_vaddD(_outBias, 1, (loss + i * _dataDim), 1, _outBias, 1, _dataDim);
            double *transWeight = calloc(_nodeNum * _dataDim, sizeof(double));
            vDSP_mtransD(_outWeight, 1, transWeight, 1, _nodeNum, _dataDim);
            double *tanhLoss = calloc(_nodeNum, sizeof(double));
            vDSP_mmulD(transWeight, 1, (loss + i * _dataDim), 1, tanhLoss, 1, _nodeNum, 1, _dataDim);
            double *outWeightLoss = calloc(_nodeNum * _dataDim, sizeof(double));
            vDSP_mmulD((loss + i * _dataDim), 1, (_state + i * _nodeNum), 1, outWeightLoss, 1, _dataDim, _nodeNum, 1);
            vDSP_vaddD(_outWeight, 1, outWeightLoss, 1, _outWeight, 1, _nodeNum * _dataDim);
            
            double *tanhIn = calloc(_nodeNum, sizeof(double));
            vDSP_vsqD((_state + i * _nodeNum), 1, tanhIn, 1, _nodeNum);
            double *one = [MLRnn fillVector:1 size:_nodeNum];
            vDSP_vsubD(tanhIn, 1, one, 1, tanhIn, 1, _nodeNum);
            if (i != _layerSize - 1) {
                vDSP_vaddD(tanhLoss, 1, flowLoss, 1, tanhLoss, 1, _nodeNum);
            }
            vDSP_vmulD(tanhLoss, 1, tanhIn, 1, tanhLoss, 1, _nodeNum);
            
            vDSP_vaddD(_flowBias, 1, tanhLoss, 1, _flowBias, 1, _nodeNum);
            if (i != 0) {
                double *transFlow = calloc(_nodeNum * _nodeNum, sizeof(double));
                vDSP_mtransD(_flowWeight, 1, transFlow, 1, _nodeNum, _nodeNum);
                vDSP_mmulD(transFlow, 1, tanhLoss, 1, flowLoss, 1, _nodeNum, 1, _nodeNum);
                free(transFlow);
                double *flowWeightLoss = calloc(_nodeNum * _nodeNum, sizeof(double));
                vDSP_mmulD(tanhLoss, 1, (_state + (i-1) * _nodeNum), 1, flowWeightLoss, 1, _nodeNum, _nodeNum, 1);
                vDSP_vaddD(_flowWeight, 1, flowWeightLoss, 1, _flowWeight, 1, _nodeNum * _nodeNum);
                free(flowWeightLoss);
            }
    
            double *inWeightLoss = calloc(_nodeNum * _dataDim, sizeof(double));
            vDSP_mmulD(tanhLoss, 1, (_input + i * _dataDim), 1, inWeightLoss, 1, _nodeNum, _dataDim, 1);
            vDSP_vaddD(_inWeight, 1, inWeightLoss, 1, _inWeight, 1, _nodeNum * _dataDim);
            
            free(transWeight);
            free(tanhLoss);
            free(outWeightLoss);
            free(tanhIn);
            free(one);
            free(inWeightLoss);
        }
        free(flowLoss);
        free(loss);
    }
    
    @end
    

    Many of the initialization methods and internal helpers are reused directly from the corresponding methods in 《OC实现(CNN)卷积神经网络》.

    Conclusion


    Running this RNN for 2500 iterations, with 100 images per batch and 50 hidden nodes, I get an accuracy of about 94%.
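
    For readers who want to wire it up themselves, here is a minimal sketch of one training step with those settings; the MNIST loading, the softmax layer, and the loss-gradient computation come from the earlier articles and are not shown, and the buffers below are placeholders that only illustrate the calling convention:

    MLRnn *rnn = [[MLRnn alloc] initWithNodeNum:50 layerSize:28 dataDim:28];

    // one flattened 28*28 image: 28 time steps with 28 pixel values each
    double *image = calloc(28 * 28, sizeof(double));   // fill with real pixel values
    double *output = [rnn forwardPropagation:image];   // 28*28 output sequence; fed to the softmax layer

    // gradient of the loss w.r.t. every time step's output, already scaled by the
    // (negative) learning rate; zeros here only as a stand-in
    double *lossGrad = calloc(28 * 28, sizeof(double));
    [rnn backPropagation:lossGrad];                    // note: backPropagation: frees this buffer
    free(image);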

    Readers who are interested can click here to see the complete code.

    References:

    1. Understanding LSTM Networks
    2. recurrent-neural-networks-tutorial
