【ML】Neural Network Architecture

Author: 盐果儿 | Published 2022-06-27 02:11

Perceptron (感知器): The simplest kind of neural network is a single-layer perceptron network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. Each node computes the sum of the products of the weights and the inputs, and if the value is above some threshold (typically 0), the neuron fires and takes the activated value (typically 1); otherwise it takes the deactivated value (typically -1). Neurons with this kind of activation function are also called artificial neurons or linear threshold units. In the literature, the term perceptron often refers to networks consisting of just one of these units.
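As a minimal sketch of the rule described above (the learning rate, epoch count, and data here are illustrative choices, not from the article), a single perceptron unit with ±1 activation values can be trained in a few lines of NumPy:

```python
import numpy as np

def step(z):
    # Threshold activation: fire (+1) when above 0, otherwise -1,
    # matching the activated/deactivated values described above.
    return np.where(z > 0, 1, -1)

def train_perceptron(X, y, lr=0.1, epochs=20):
    # X: (n_samples, n_features), y: labels in {-1, +1}
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if step(xi @ w + b) != yi:  # update only on mistakes
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Logical AND, encoded with -1/+1 labels (linearly separable)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print(step(X @ w + b))  # expected: [-1 -1 -1  1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop stops making mistakes after finitely many updates.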

Multilayer Perceptron (多层感知器): A multilayer perceptron (MLP) contains one or more hidden layers in addition to one input and one output layer. While a single-layer perceptron can only learn linear functions, a multilayer perceptron can also learn non-linear functions. An MLP is a special case of a feedforward neural network in which every layer is fully connected; in some definitions the number of nodes in each layer is the same, and in many definitions the activation function is the same across hidden layers.
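To make the non-linearity point concrete, here is a sketch of a two-layer MLP computing XOR, a function no single-layer perceptron can represent. The weights are hand-picked for illustration; none of this comes from the article:

```python
import numpy as np

def step(z):
    # Heaviside step used here for illustration (0/1 outputs)
    return (z > 0).astype(int)

# Hand-picked weights (an illustrative choice): hidden unit 1 acts as
# OR, hidden unit 2 as AND, and the output unit combines them into XOR.
W1 = np.array([[1.0, 1.0],     # hidden unit 1: x1 + x2, bias -0.5 (OR)
               [1.0, 1.0]])    # hidden unit 2: x1 + x2, bias -1.5 (AND)
b1 = np.array([-0.5, -1.5])
W2 = np.array([1.0, -1.0])     # output: OR and not AND
b2 = -0.5

def mlp(x):
    h = step(x @ W1.T + b1)    # hidden layer
    return step(h @ W2 + b2)   # output layer

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mlp(np.array(x)))  # prints the XOR truth table: 0, 1, 1, 0
```

It is the extra layer, not any single unit, that buys the non-linear decision boundary: each hidden unit is still a linear threshold, but their composition is not.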

Feedforward Neural Network (前馈神经网络 FNN): An FNN is an artificial neural network in which connections between the nodes do not form a cycle. As such, it differs from its descendant, the recurrent neural network. The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, information moves in only one direction, forward: from the input nodes, through the hidden nodes (if any), to the output nodes. There are no cycles or loops in the network. "Feedforward" describes the architecture; its counterpart is the recurrent neural network.
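In symbols (notation added here for illustration, not from the original): an L-layer feedforward network computes y = f_L(f_{L-1}(...f_1(x)...)), where each layer applies f_i(h) = σ(W_i h + b_i). Because each layer's output feeds only the next layer, no value ever flows back to an earlier one, which is exactly the "no cycles" property above.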

Recurrent Neural Network / Feedback Neural Network (反馈深度网络): A recurrent neural network (RNN) is a class of artificial neural networks in which connections between nodes form a directed graph along a temporal sequence, which allows the network to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable-length sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition. Recurrent neural networks are theoretically Turing complete and can run arbitrary programs to process arbitrary sequences of inputs.
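A minimal sketch of the recurrence (the dimensions and random weights are illustrative assumptions, not from the article): the same weight matrices are reused at every time step, and the hidden state h is the "internal state (memory)" mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes are illustrative; the recurrence, not the sizes, is the point.
n_in, n_hid = 3, 4
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden -> hidden (the cycle)
b_h = np.zeros(n_hid)

def rnn_forward(xs):
    # xs: a sequence of input vectors of any length; h carries state
    # from one step to the next, so earlier inputs influence later outputs.
    h = np.zeros(n_hid)
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return h  # final hidden state summarizes the whole sequence

seq = [rng.normal(size=n_in) for _ in range(7)]  # variable-length input
print(rnn_forward(seq))
```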

(Recurrent Neural Network: https://blog.csdn.net/bestrivern/article/details/90723524)

Back Propagation Algorithm: Backpropagation (BP) is a training method rather than an architecture: it uses the chain rule to compute the gradient of the loss with respect to every weight in the network. It can train both feedforward networks and recurrent neural networks.
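As a sketch of the idea (the network size and data are illustrative assumptions), here is backpropagation written out by hand for a one-hidden-layer feedforward network with squared-error loss:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny 2-3-1 network with a tanh hidden layer; sizes are illustrative.
x = rng.normal(size=2)
t = 1.0                                    # target output
W1 = rng.normal(scale=0.5, size=(3, 2))   # input -> hidden
W2 = rng.normal(scale=0.5, size=3)        # hidden -> output

# Forward pass (keep the intermediates needed for the backward pass)
z1 = W1 @ x
h = np.tanh(z1)
y = W2 @ h
loss = 0.5 * (y - t) ** 2

# Backward pass: apply the chain rule layer by layer, output to input
dy = y - t                  # dL/dy
dW2 = dy * h                # dL/dW2
dh = dy * W2                # dL/dh
dz1 = dh * (1 - h ** 2)     # through tanh: tanh'(z) = 1 - tanh(z)^2
dW1 = np.outer(dz1, x)      # dL/dW1

# One gradient-descent step
lr = 0.1
W1 -= lr * dW1
W2 -= lr * dW2
print(loss)
```

For recurrent networks the same chain-rule bookkeeping is applied after unrolling the network over time, a procedure known as backpropagation through time.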

Turing Complete (图灵完备性): In computability theory, a system of data-manipulation rules (such as a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing-complete or computationally universal if it can be used to simulate any Turing machine. This means the system is able to recognize or decide other data-manipulation rule sets. Turing completeness is used as a way to express the power of such a rule set. Virtually all programming languages today are Turing-complete.

The Threshold Logic Unit (TLU) algorithm learns a weight matrix and a threshold matrix that describe the lines (more generally, hyperplanes) separating the input classes.
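For example (an illustration added here, not from the original text): a TLU with weight vector w = (1, 1) and threshold θ = 1.5 fires only when x₁ + x₂ > 1.5, so it computes logical AND, and its decision boundary is the line x₁ + x₂ = 1.5, which separates the positive input (1, 1) from the other three.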

Parameters: the number of hidden layers, the number of nodes per layer, and the choice of regularization and nonlinear (activation) function.
