This post summarizes the basics of machine learning: linear regression, logistic regression, neural networks, and more (continuously updated…)
Linear Regression with a Single Feature
Hypothesis function: h_\theta(x) = \theta_0 + \theta_1 x
Error: h_\theta(x^{(i)}) - y^{(i)}
Cost function (least-squares principle): J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)})^2
Objective function (goal): \min_{\theta_0, \theta_1} J(\theta_0, \theta_1)
Gradient descent: start from an initial \theta and repeatedly step in the direction of steepest descent of J
Result: converges to a local optimum
Gradient descent algorithm: repeat until convergence \theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1), for j = 0, 1
Simultaneous update: compute the new values of all \theta_j first, then assign them together
Properties:
- need to choose \alpha
- need many iterations
- works well even when n is large
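As a minimal sketch of the update rule above (the data, learning rate, and iteration count are illustrative assumptions, not from the original notes), batch gradient descent for one feature can be written as:

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, iters=5000):
    """Batch gradient descent for single-feature linear regression.

    Performs the simultaneous update: both gradients are computed
    from the current (theta0, theta1) before either is changed.
    """
    m = len(y)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iters):
        h = theta0 + theta1 * x           # hypothesis h_theta(x)
        err = h - y                       # error term
        grad0 = err.sum() / m             # dJ/dtheta0
        grad1 = (err * x).sum() / m       # dJ/dtheta1
        theta0 -= alpha * grad0           # assign together: simultaneous update
        theta1 -= alpha * grad1
    return theta0, theta1

# fit y = 1 + 2x on a tiny noiseless dataset
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x
t0, t1 = gradient_descent(x, y)
```

With a small enough \alpha the iterates converge toward (\theta_0, \theta_1) = (1, 2); too large an \alpha would make the updates diverge, which is why \alpha must be chosen.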
Normal equation:
Result: solve for the value of \theta at which the derivative of the cost function is 0
Each row of X corresponds to one training example; X has full column rank
y contains all the labels of the training set
Cost function in matrix form: J(\theta) = \frac{1}{2m} (X\theta - y)^T (X\theta - y)
Normal equation: \theta = (X^T X)^{-1} X^T y
Properties:
- no need to choose \alpha
- no need to iterate
- need to compute (X^TX)^-1
- slow if n is large
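A small sketch of the closed-form solution, reusing the same toy data as above (an assumption for illustration). Rather than computing (X^TX)^{-1} explicitly, solving the linear system is the numerically preferable route:

```python
import numpy as np

# toy data: y = 1 + 2x
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x

# design matrix X: one row per training example, first column all ones
X = np.column_stack([np.ones_like(x), x])

# normal equation theta = (X^T X)^{-1} X^T y,
# implemented as solving (X^T X) theta = X^T y
theta = np.linalg.solve(X.T @ X, X.T @ y)
```

No \alpha and no iterations are needed, but the O(n^3) solve on X^TX is what makes this slow when the number of features n is large.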
References:
Linear Regression with Multiple Features
Hypothesis: h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n
Vectorized: h_\theta(x) = \theta^T x, with x_0 = 1
Cost function: J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)})^2
Gradient descent: \theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) x_j^{(i)} (simultaneously for all j)
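The multi-feature update vectorizes into a few matrix operations. Below is a hedged sketch (the data and hyperparameters are made-up examples); the entire per-feature loop collapses into `X.T @ err`:

```python
import numpy as np

def gradient_descent_vec(X, y, alpha=0.3, iters=5000):
    """Vectorized batch gradient descent for multi-feature linear regression.

    X must already include the bias column of ones (x_0 = 1).
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        err = X @ theta - y                # h_theta(x) - y for every example
        theta -= alpha * (X.T @ err) / m   # simultaneous update of all theta_j
    return theta

# two features; targets generated from theta = (1, 2, -3)
X = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])
y = X @ np.array([1.0, 2.0, -3.0])
theta = gradient_descent_vec(X, y)
```

The vectorized form is exactly the per-coordinate rule above applied to all j at once, which is also what makes the "simultaneous update" automatic.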
Feature scaling: make sure features are on a similar scale.
Mean normalization: make features have approximately zero mean, e.g. x_j := (x_j - \mu_j) / s_j, where s_j is the standard deviation (or the range) of feature j.
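A minimal sketch of mean normalization, using the standard deviation as the scale s_j (one common choice; the house-style numbers are illustrative assumptions):

```python
import numpy as np

def mean_normalize(X):
    """Mean normalization: x_j := (x_j - mu_j) / s_j per feature column,
    so every feature has ~zero mean and comparable scale."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

# features on very different scales (e.g. house size vs. number of rooms)
X = np.array([[2104.0, 3.0],
              [1600.0, 3.0],
              [2400.0, 4.0],
              [1416.0, 2.0]])
X_norm, mu, sigma = mean_normalize(X)
```

Returning mu and sigma matters in practice: the same shift and scale must be applied to any new example before prediction.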
Logistic Regression
Hypothesis: h_\theta(x) = g(\theta^T x), where g(z) = \frac{1}{1 + e^{-z}} is the sigmoid function
Predict y = 1 if h_\theta(x) \ge 0.5
Predict y = 0 if h_\theta(x) < 0.5
Decision boundary: the set of points where \theta^T x = 0, i.e. where h_\theta(x) = 0.5
Loss function: J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log (1 - h_\theta(x^{(i)})) \right]
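The sigmoid hypothesis, the 0.5 threshold, and the cross-entropy loss can be sketched together as follows (the 1-D toy dataset and the hand-picked \theta are assumptions for illustration, not fitted values):

```python
import numpy as np

def sigmoid(z):
    """g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy_loss(theta, X, y):
    """J(theta) = -(1/m) * sum(y*log(h) + (1-y)*log(1-h))
    with h = sigmoid(X @ theta)."""
    m = len(y)
    h = sigmoid(X @ theta)
    return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m

def predict(theta, X):
    """y = 1 when h_theta(x) >= 0.5, which is exactly theta^T x >= 0."""
    return (X @ theta >= 0).astype(int)

# 1-D toy data with a bias column; theta puts the decision boundary at x = 2
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([0, 0, 1, 1])
theta = np.array([-2.0, 1.0])   # theta^T x = x - 2
preds = predict(theta, X)
loss = cross_entropy_loss(theta, X, y)
```

Note that the decision boundary \theta^T x = 0 is linear in x even though h_\theta itself is nonlinear, since the sigmoid crosses 0.5 exactly at z = 0.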