The Softmax Activation Function

Author: skullfang | Published 2018-01-08 16:08

Preface

Softmax is another component commonly used in convolutional neural networks. It generally serves as the last layer, producing the network's output, and its input can be a fully connected layer.

Expression

For weighted inputs $z_1, \dots, z_K$ to the output layer, the activation of the $j$-th output neuron is

$$a_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}$$
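
For concreteness, here is a minimal NumPy sketch of the expression above (this is separate from the article's Theano code below; subtracting the maximum is a standard numerical-stability trick, not part of the formula itself):

import numpy as np

def softmax(z):
    # Shift by the max before exponentiating; this leaves the result
    # unchanged but avoids overflow for large inputs.
    e = np.exp(z - np.max(z))
    return e / e.sum()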

Property

Because every activation shares the same normalizing denominator,

$$\sum_{j=1}^{K} a_j = \frac{\sum_{j=1}^{K} e^{z_j}}{\sum_{k=1}^{K} e^{z_k}} = 1$$

This means the outputs of the last layer always sum to 1, which makes softmax very convenient for classification: each output can be read directly as the probability that the input belongs to the corresponding label.
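
Continuing the NumPy sketch above, the property is easy to verify numerically (the input values here are arbitrary examples):

z = np.array([2.0, 1.0, 0.1])
a = softmax(z)
print(a)        # approximately [0.659 0.242 0.099] -- one probability per label
print(a.sum())  # 1.0 -- the outputs always sum to one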

代码

# -*- coding: utf-8 -*-
# @Time    : 2017/12/4 上午10:22
# @Author  : SkullFang
# @Email   : yzhang.private@gmail.com
# @File    : SoftmaxLayer.py
# @Software: PyCharm

import numpy as np
import theano
import theano.tensor as T
from theano.tensor import shared_randomstreams
from theano.tensor.nnet import softmax

GPU = False
if GPU:
    print("Trying to run under a GPU")
    try:
        # The device can only be set before Theano initializes, so this
        # assignment may fail; ignore the failure and fall back to the CPU.
        theano.config.device = 'gpu'
    except Exception:
        pass
    theano.config.floatX = 'float32'
else:
    print("Running with a CPU")

class SoftmaxLayer(object):
    def __init__(self, n_in, n_out, p_dropout=0.0):
        """
        :param n_in: number of input neurons
        :param n_out: number of output neurons
        :param p_dropout: probability of dropping out a unit
        """
        self.n_in = n_in
        self.n_out = n_out
        self.p_dropout = p_dropout
        # Initialize weights and biases to zero; for a softmax output
        # layer this is a common starting point.
        self.w = theano.shared(
            np.zeros((n_in, n_out), dtype=theano.config.floatX), name='w', borrow=True
        )
        self.b = theano.shared(
            np.zeros((n_out,), dtype=theano.config.floatX), name='b', borrow=True
        )
        self.params = [self.w, self.b]


    def set_inpt(self, inpt, inpt_dropout, mini_batch_size):
        self.inpt = inpt.reshape((mini_batch_size, self.n_in))
        # Inference path: scale the weighted input by (1 - p_dropout) to
        # compensate for the units dropped during training.
        self.output = softmax((1 - self.p_dropout) * T.dot(self.inpt, self.w) + self.b)
        self.y_out = T.argmax(self.output, axis=1)
        # Training path: apply dropout to the input activations.
        self.inpt_dropout = dropout_layer(
            inpt_dropout.reshape((mini_batch_size, self.n_in)), self.p_dropout)
        self.output_dropout = softmax(T.dot(self.inpt_dropout, self.w) + self.b)

    def cost(self, net):
        "Return the log-likelihood cost."
        return -T.mean(T.log(self.output_dropout)[T.arange(net.y.shape[0]), net.y])

    def accuracy(self, y):
        "Return the fraction of the mini-batch classified correctly."
        return T.mean(T.eq(y, self.y_out))


def size(data):
    "Return the number of examples in the dataset `data`."
    return data[0].get_value(borrow=True).shape[0]

def dropout_layer(layer, p_dropout):
    # Keep each unit with probability (1 - p_dropout) by multiplying the
    # activations with a random binary mask.
    srng = shared_randomstreams.RandomStreams(
        np.random.RandomState(0).randint(999999))
    mask = srng.binomial(n=1, p=1 - p_dropout, size=layer.shape)
    return layer * T.cast(mask, theano.config.floatX)
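
A minimal usage sketch follows (the batch size of 10, the 784-dimensional input such as a flattened MNIST image, and the variable names are illustrative assumptions, not part of the article's code):

x = T.matrix("x")
layer = SoftmaxLayer(n_in=784, n_out=10)
layer.set_inpt(x, x, mini_batch_size=10)  # same symbolic input for both paths

# Compile a prediction function from the non-dropout output.
predict = theano.function([x], layer.y_out)

batch = np.random.rand(10, 784).astype(theano.config.floatX)
print(predict(batch))  # class indices; all 0 here because w and b start at zero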



