8. Neural Networks: Representation

Author: 玄语梨落 | Published 2020-08-19 21:54

Neural Networks: Representation

Non-linear hypotheses

When the number of features is large, building non-linear hypotheses by adding polynomial terms quickly becomes computationally impractical; neural networks offer another way to learn complex non-linear hypotheses.

Neurons and the brain

Origins: Algorithms that try to mimic the brain.
Was very widely used in the 80s and early 90s; popularity diminished in the late 90s.
Recent resurgence: State-of-the-art technique for many applications.

The 'one learning algorithm' hypothesis

Sensor representations in the brain.

Model representation I

A neural network consists of:

  • Input layer
  • Hidden layers (there can be more than one)
  • Output layer

a_i^{(j)} = "activation" of unit i in layer j
\Theta^{(j)} = matrix of weights controlling the function mapping from layer j to layer j+1

If a network has s_j units in layer j and s_{j+1} units in layer j+1, then \Theta^{(j)} will be of dimension s_{j+1} \times (s_j + 1).
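
A minimal NumPy sketch of this mapping (the layer sizes and random weights are illustrative, not from these notes): each \Theta^{(j)} has shape s_{j+1} \times (s_j + 1) because a bias unit a_0 = 1 is prepended to the activations of layer j.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_layer(a_prev, Theta):
    """Map activations of layer j to layer j+1.

    a_prev: activations of layer j, shape (s_j,)
    Theta:  weight matrix of shape (s_{j+1}, s_j + 1);
            column 0 multiplies the bias unit a_0 = 1.
    """
    a_with_bias = np.concatenate(([1.0], a_prev))  # prepend bias unit
    z = Theta @ a_with_bias                        # shape (s_{j+1},)
    return sigmoid(z)

# Illustrative sizes: 3 inputs, 5 hidden units, 1 output (assumed for this sketch)
rng = np.random.default_rng(0)
Theta1 = rng.normal(size=(5, 3 + 1))   # maps layer 1 -> layer 2
Theta2 = rng.normal(size=(1, 5 + 1))   # maps layer 2 -> layer 3

x = np.array([0.5, -1.2, 3.0])         # one training example
a2 = forward_layer(x, Theta1)          # hidden-layer activations a^{(2)}
h = forward_layer(a2, Theta2)          # hypothesis h_Theta(x)
print(h)
```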

Examples and intuitions I

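The figure for this section did not survive. In the course, this part typically shows that a single sigmoid unit can compute logical AND or OR with suitably chosen weights; a hedged sketch, with the commonly cited weights assumed rather than taken from these notes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(Theta, x1, x2):
    """Single sigmoid unit: h = g(Theta . [1, x1, x2])."""
    return sigmoid(Theta @ np.array([1.0, x1, x2]))

# Commonly used illustrative weights (an assumption, not from these notes):
Theta_AND = np.array([-30.0, 20.0, 20.0])   # fires only when x1 = x2 = 1
Theta_OR  = np.array([-10.0, 20.0, 20.0])   # fires when x1 = 1 or x2 = 1

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, round(neuron(Theta_AND, x1, x2)), round(neuron(Theta_OR, x1, x2)))
```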

Multi-class classification

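The figure here is also missing. In the course, multi-class classification is handled by giving the output layer one unit per class and encoding labels as one-hot vectors, so h_\Theta(x) \in \mathbb{R}^K and the prediction is the index of the largest output unit. A minimal sketch (the four-class setup and the network outputs below are made up for illustration):

```python
import numpy as np

# One-hot labels for K = 4 classes; the class names are the usual course example
# (pedestrian / car / motorcycle / truck), assumed here for illustration.
classes = ["pedestrian", "car", "motorcycle", "truck"]
y = np.array([0, 1, 0, 0])             # this example is labeled "car"

# Suppose a trained network's output layer produces this vector for one image
# (the numbers are invented for illustration):
h = np.array([0.05, 0.88, 0.04, 0.10])

predicted = int(np.argmax(h))          # index of the largest output unit
print(classes[predicted])              # -> "car"
print(predicted == int(np.argmax(y)))  # prediction matches the label
```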
