Next, let's understand it mathematically:
![](https://img.haomeiwen.com/i1873837/448c1b3f3c4ee654.png)
Here is a picture of a typical Neural Network. There are 4 layers: Layer 1 is called the input layer; Layers 2 and 3 are called hidden layers, which compute intermediate features that lead us to a more meaningful (abstract?) result; Layer 4 is called the output layer.
![](https://img.haomeiwen.com/i1873837/98d83547836e845a.png)
We want to compute the values in Layer 2. How do we do that? Suppose we are already given a 3x4 weight matrix Theta(1). Then a(2) is just g(Theta(1) * a(1)), done! (Here g is the sigmoid activation, and a(1) includes the bias unit a0 = 1, which is why Theta(1) has 4 columns for 3 input units.)
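As a minimal sketch of that single forward-propagation step in plain Python (the weight values here are made up for illustration, not taken from the course):

```python
import math

def sigmoid(z):
    """Logistic activation g(z) = 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(theta, a_prev):
    """Compute the next layer's activations: a_next = g(theta * a_prev).

    theta  : list of rows, one per unit in the next layer; each row's
             first weight multiplies the bias unit.
    a_prev : activations of the previous layer, WITHOUT the bias unit.
    """
    a_with_bias = [1.0] + a_prev  # prepend the bias unit a0 = 1
    return [sigmoid(sum(w * a for w, a in zip(row, a_with_bias)))
            for row in theta]

# Hypothetical 3x4 matrix Theta(1): 3 hidden units, 3 inputs + bias column
theta1 = [[-1.0, 0.5, 0.5, 0.0],
          [ 0.0, 1.0, -1.0, 0.5],
          [ 0.5, 0.0, 0.0, 1.0]]
a1 = [0.2, 0.7, 0.4]            # input layer activations
a2 = layer_forward(theta1, a1)  # activations of Layer 2
print(a2)
```

Stacking calls to `layer_forward` (one per layer, with each layer's own Theta) gives the full forward pass through the network.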
Next,
![](https://img.haomeiwen.com/i1873837/0f5ad53ee64129e3.png)
A Neural Network can do even more complicated things, such as logical operations.
By taking advantage of the sigmoid(x) function, we can approximate any logic gate.
I believe the picture above explains everything!
Finally,
![](https://img.haomeiwen.com/i1873837/e6c44724311d376d.png)
Putting together all the cool things we've learned so far, it's time to realize the true power of multiclass classification. How do we do that?
We simply represent each class as a distinct one-hot vector like [1,0,0,0], [0,1,0,0], [0,0,1,0], etc.
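A small sketch of the two pieces involved (this is an illustration in Python, not the course's Octave predict.m): encoding a label as a one-hot vector, and turning the output layer's activations back into a class by picking the unit that fired the strongest.

```python
def one_hot(label, num_classes):
    """Encode a 0-based class label as a distinct unit vector."""
    return [1 if i == label else 0 for i in range(num_classes)]

def predict(output_activations):
    """Predicted class = index of the largest output activation (argmax)."""
    return max(range(len(output_activations)),
               key=lambda i: output_activations[i])

print(one_hot(0, 4))  # [1, 0, 0, 0]
print(one_hot(2, 4))  # [0, 0, 1, 0]

# Hypothetical output-layer activations for a 4-class network
print(predict([0.1, 0.7, 0.15, 0.05]))  # 1
```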
You can find my complete code for this section at: https://github.com/yhyu13/Coursera-Machine-Learning-Andrew-Ng (file name: predict.m)