introduction
-
supervised learning (with labels)
regression
classification
-
unsupervised learning (no labels, or all examples share the same label)
clustering
univariate (one variable) linear regression (supervised learning)
-
m: number of training examples
x's: input variables / features
y's: output variable / target variable
e.g. (x, y): a single training example; (x^(i), y^(i)): the i-th training example
Hypothesis: h_θ(x) = θ_0 + θ_1·x
Parameters: θ_0, θ_1
cost function: J(θ_0, θ_1) = (1/2m) · Σ_{i=1..m} (h_θ(x^(i)) − y^(i))²
(← this is a squared-error function, also the most commonly used one for regression problems)
goal: minimize J(θ_0, θ_1) over θ_0, θ_1
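The cost function above translates directly into code. A minimal sketch in Python/NumPy (the function names and sample points are illustrative, not from the notes):

```python
import numpy as np

def hypothesis(x, theta0, theta1):
    """h_theta(x) = theta0 + theta1 * x for univariate linear regression."""
    return theta0 + theta1 * x

def cost(x, y, theta0, theta1):
    """Squared-error cost J(theta0, theta1) = (1/2m) * sum((h(x_i) - y_i)^2)."""
    m = len(y)
    errors = hypothesis(x, theta0, theta1) - y
    return np.sum(errors ** 2) / (2 * m)

# tiny example: points lying exactly on y = 1 + 2x give zero cost
x = np.array([1.0, 2.0, 3.0])
y = np.array([3.0, 5.0, 7.0])
print(cost(x, y, 1.0, 2.0))  # 0.0
print(cost(x, y, 0.0, 0.0))  # > 0
```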
simplify the hypothesis as h_θ(x) = θ_1·x (set θ_0 = 0), which makes the cost function J(θ_1) easy to visualize
"Batch"Gradient descent("Batch"梯度下降) with one variable
Batch: every step of gradient descent uses the entire training set (the cost function sums the squared error over all examples)
have some function J(θ_0, θ_1)
want min over θ_0, θ_1 of J(θ_0, θ_1)
outline:
1. start with some θ_0, θ_1 (commonly they are all zeros)
2. keep changing θ_0, θ_1 to reduce J(θ_0, θ_1) until we hopefully end up at a minimum
update rule: θ_j := θ_j − α · ∂J(θ_0, θ_1)/∂θ_j (update θ_0 and θ_1 simultaneously). The derivative term is the slope of J at the current point; the value of α (the learning rate) determines how gradient descent behaves (if Θ has already reached a local minimum, the derivative term is 0, so the solution stays at that local minimum)
gradient descent for linear regression: plug the squared-error cost into the update rule, giving
θ_0 := θ_0 − α · (1/m) · Σ (h_θ(x^(i)) − y^(i))
θ_1 := θ_1 − α · (1/m) · Σ (h_θ(x^(i)) − y^(i)) · x^(i)
finally, substituting the parameters obtained from gradient descent into the hypothesis gives the optimal linear fit
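The two updates above can be run in a loop. A minimal batch gradient descent sketch in Python/NumPy (alpha, the iteration count, and the sample data are illustrative assumptions):

```python
import numpy as np

def batch_gradient_descent(x, y, alpha=0.01, iterations=1000):
    """Batch gradient descent for univariate linear regression.
    Every iteration uses all m examples to compute the gradients."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0              # common choice: start from zeros
    for _ in range(iterations):
        errors = theta0 + theta1 * x - y   # h_theta(x^(i)) - y^(i) for all i
        grad0 = errors.sum() / m           # dJ/dtheta0
        grad1 = (errors * x).sum() / m     # dJ/dtheta1
        # simultaneous update of both parameters
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# data that roughly follows y = 1 + 2x; the fit should come out close to that
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
print(batch_gradient_descent(x, y, alpha=0.05, iterations=5000))
```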
Matrices and vectors (review)
-
Vector: An n x 1 matrix (in this course)
e.g. A_ij: the element in the i-th row, j-th column
-
matrix addition (omitted)
-
scalar multiplication
-
matrix multiplication
calculate all of the predicted prices at the same time (with a single hypothesis)
House sizes: 2104, 1416, 1534, 852
hypothesis: h_θ(x) = θ_0 + θ_1·x
prediction = DataMatrix * parameters:
[1 2104; 1 1416; 1 1534; 1 852] × [θ_0; θ_1] gives all four predicted prices in one multiplication
multiple hypotheses: stack each hypothesis's parameters as a column of a parameter matrix; DataMatrix × ParameterMatrix then evaluates every hypothesis on every example at once
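A small NumPy sketch of both cases, using the house sizes from the notes; the parameter values themselves are made up for illustration:

```python
import numpy as np

# one column of ones (for theta0) plus the house sizes from the notes
data = np.array([[1, 2104],
                 [1, 1416],
                 [1, 1534],
                 [1,  852]], dtype=float)

# single hypothesis: prediction = DataMatrix @ parameters
theta = np.array([-40.0, 0.25])          # illustrative theta0, theta1
print(data @ theta)                      # four predicted prices at once

# multiple hypotheses: one column of parameters per hypothesis
thetas = np.array([[-40.0, 200.0, -150.0],
                   [ 0.25,   0.1,    0.4]])
print(data @ thetas)                     # 4x3 matrix: every price under every hypothesis
```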
-
properties of matrix multiplication
not commutative: in general, do not expect A × B = B × A
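A quick NumPy check of this property (the two matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
print(A @ B)   # [[2 1] [4 3]]
print(B @ A)   # [[3 4] [1 2]]  (different result: A @ B != B @ A)
```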
-
matrix inverse
if A is an m x m (square) matrix and it has an inverse A⁻¹, then A·A⁻¹ = A⁻¹·A = I
a matrix with no inverse is called singular (or degenerate)
computing an inverse by hand: use the determinant and the adjugate matrix, substituting into A⁻¹ = adj(A)/det(A); in practice a library is usually used instead
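As noted, the inverse is usually obtained from a library. A short NumPy sketch (the matrices are illustrative):

```python
import numpy as np

A = np.array([[3.0, 4.0],
              [2.0, 16.0]])
A_inv = np.linalg.inv(A)
print(A_inv)
print(A @ A_inv)            # approximately the identity matrix (up to floating-point error)

# a singular (degenerate) matrix has determinant 0 and no inverse
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(S))     # 0.0
# np.linalg.inv(S) would raise numpy.linalg.LinAlgError: Singular matrix
```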
-
matrix transpose (omitted)