Loss Functions
1. MAE
MAE (Mean Absolute Error), also commonly called L1-Loss, measures the difference between predictions and ground-truth values by averaging the absolute differences between them.
The MAE formula is:
$$\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$
where:
- $n$ is the total number of samples.
- $y_i$ is the ground-truth value of the $i$-th sample.
- $\hat{y}_i$ is the predicted value of the $i$-th sample.
- $\left| y_i - \hat{y}_i \right|$ is the absolute error between the ground-truth and predicted values.
```python
import torch

def test01():
    # L1Loss computes the mean absolute error
    l1_loss_fn = torch.nn.L1Loss()
    y_true = torch.tensor([1, 2, 3, 4, 5, 6], dtype=torch.float32)
    y_pred = torch.tensor([10, 20, 30, 40, 50, 60], dtype=torch.float32)
    # the signature is loss_fn(input, target); MAE is symmetric, so the order does not change the value
    loss = l1_loss_fn(y_pred, y_true)
    print(loss)

if __name__ == '__main__':
    test01()
```
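As a quick check against the formula: the absolute errors here are 9, 18, 27, 36, 45 and 54, so the printed loss is $189 / 6 = 31.5$.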
2. MSE
Mean squared error loss, also called L2 Loss.
MSE (Mean Squared Error) measures the difference between predictions and ground-truth values by averaging the squared errors between them.
The MSE formula is:
$$\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2$$
where:
- $n$ is the total number of samples.
- $y_i$ is the ground-truth value of the $i$-th sample.
- $\hat{y}_i$ is the predicted value of the $i$-th sample.
- $\left( y_i - \hat{y}_i \right)^2$ is the squared error between the ground-truth and predicted values.
```python
import torch

def test01():
    # MSELoss computes the mean squared error
    mse_loss_fn = torch.nn.MSELoss()
    y_true = torch.tensor([1, 2, 3, 4, 5, 6], dtype=torch.float32)
    y_pred = torch.tensor([2, 3, 4, 5, 6, 8], dtype=torch.float32)
    loss = mse_loss_fn(y_pred, y_true)
    print(loss)

if __name__ == '__main__':
    test01()
```
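Here the squared errors are 1, 1, 1, 1, 1 and 4, so the printed loss is $9 / 6 = 1.5$.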
3. SmoothL1Loss
SmoothL1Loss behaves like the L2 loss when the error is small and like the L1 loss when the error is large.
The SmoothL1Loss formula is:
$$\text{SmoothL1Loss}(x) = \begin{cases} 0.5 \cdot x^2, & \text{if } |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}$$
where $x$ is the error between the predicted and ground-truth values, i.e. $x = y_i - \hat{y}_i$.
The average loss over all samples is:
$$L = \frac{1}{n} \sum_{i=1}^{n} L_i$$
```python
import torch

def test01():
    # the module API and the functional API compute the same loss
    smooth_l1_fn = torch.nn.SmoothL1Loss()
    smooth_l1_fn1 = torch.nn.functional.smooth_l1_loss
    y_true = torch.tensor([1, 2, 3, 4], dtype=torch.float32)
    y_pred = torch.tensor([3, 2.5, 3.5, 4.5], dtype=torch.float32)
    loss = smooth_l1_fn(y_pred, y_true)
    print(loss)

if __name__ == '__main__':
    test01()
```
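Here the errors are 2.0, 0.5, 0.5 and 0.5: the first falls in the $|x| - 0.5$ branch (1.5), the others in the $0.5 \cdot x^2$ branch (0.125 each), so the printed loss is $1.875 / 4 = 0.46875$.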
4. CrossEntropyLoss
Cross-entropy loss: when the output layer uses a softmax activation for multi-class classification, the cross-entropy loss is the usual choice.
For multi-class problems, the CrossEntropyLoss formula is:
$$\text{CrossEntropyLoss}(y, \hat{y}) = - \sum_{i=1}^{C} y_i \log(\hat{y}_i)$$
where:
- $C$ is the total number of classes.
- $y$ is the one-hot encoded vector of the true label, indicating the true class.
- $\hat{y}$ is the model output (the probability distribution after softmax).
- $y_i$ is the $i$-th element of the true label vector (0 or 1).
- $\hat{y}_i$ is the predicted probability for class $i$.
```python
import torch
import torch.nn as nn

def test01():
    # CrossEntropyLoss expects raw logits and integer class indices;
    # it applies log-softmax internally
    ce_loss_fn = nn.CrossEntropyLoss()
    logits = torch.tensor([[1.5, 2.0, 0.5],
                           [0.5, 1.0, 1.5]])
    target = torch.tensor([1, 2])  # true class index of each sample
    loss = ce_loss_fn(logits, target)
    print(loss)

if __name__ == '__main__':
    test01()
```
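As a rough sanity check against the formula: after softmax, the probabilities of the target classes are about 0.547 and 0.507, giving per-sample losses of roughly 0.604 and 0.680, so the printed mean loss is approximately 0.642.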
5. BCELoss
Binary cross-entropy loss, used when the output layer applies a sigmoid activation for binary classification.
For binary classification, the simplified form of CrossEntropyLoss is called binary cross-entropy (Binary Cross-Entropy Loss):
$$\text{BinaryCrossEntropy}(y, \hat{y}) = - \left[ y \log(\hat{y}) + (1 - y) \log(1 - \hat{y}) \right]$$
The logarithm base defaults to $e$, and $y$ is the true class label. From the formula, $L$ is a piecewise function:
$$L = -\log(\text{sigmoid output}), \quad y = 1$$
$$L = -\log(1 - \text{sigmoid output}), \quad y = 0$$
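For instance, with a sigmoid output of 0.8: if $y = 1$, $L = -\ln(0.8) \approx 0.223$; if $y = 0$, $L = -\ln(0.2) \approx 1.609$, so a confident but wrong prediction is penalized much more heavily.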
```python
import torch

def test01():
    # samples
    x = torch.tensor([[0.1, 0.2, 0.3],
                      [0.4, 0.5, 0.6],
                      [0.41, 0.52, 0.63],
                      [0.41, 0.51, 0.61]])
    # weights
    w = torch.tensor([[0.11, 0.22, 0.33],
                      [0.44, 0.55, 0.66],
                      [0.41, 0.53, 0.63],
                      [0.4, 0.1, 0.6]])
    # bias
    b = 0.1
    # prediction (element-wise)
    y = w * x + b
    # sigmoid activation squashes the outputs into (0, 1)
    y_pred = torch.sigmoid(y)
    print(y_pred)
    y_true = torch.tensor([[1, 0, 0],
                           [1, 0, 0],
                           [0, 0, 1],
                           [0, 0, 1]], dtype=torch.float32)
    # BCELoss expects probabilities that have already passed through sigmoid
    loss_fn = torch.nn.BCELoss()
    loss_fn1 = torch.nn.functional.binary_cross_entropy
    loss = loss_fn1(y_pred, y_true)
    print(loss)

if __name__ == '__main__':
    test01()
```
The BP Algorithm
Basic steps of the error backpropagation (BP) algorithm:
- Forward propagation: run the forward pass to obtain the predictions.
- Compute the loss: use a loss function $L(y_{\text{pred}}, y_{\text{true}})$ to measure the gap between predictions and ground truth.
- Gradient computation: the core of backpropagation is computing the gradient of the loss $L$ with respect to every weight and bias.
- Parameter update: once the gradients of each layer are available, gradient descent is used to update the weights and biases of each layer so that the loss gradually decreases.
- Iterative training: repeat forward propagation, gradient computation and parameter updates until the loss converges or a predefined stopping condition is reached (a minimal sketch of this loop follows the list).
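To make the correspondence with PyTorch concrete, here is a minimal sketch of this loop; the one-layer model, random data, learning rate and epoch count below are arbitrary choices for illustration only.

```python
import torch

# hypothetical tiny model and made-up data, just to map the BP steps onto PyTorch calls
model = torch.nn.Linear(3, 1)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(8, 3)          # made-up inputs
y = torch.randn(8, 1)          # made-up targets

for epoch in range(100):       # iterative training
    y_pred = model(x)          # 1. forward propagation
    loss = loss_fn(y_pred, y)  # 2. compute the loss
    optimizer.zero_grad()      # clear old gradients
    loss.backward()            # 3. gradient computation by backpropagation
    optimizer.step()           # 4. parameter update by gradient descent
```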
1. Forward propagation
That is, data is passed layer by layer from the input layer through the neurons to the output layer.
1. Input layer to hidden layer
$$z^{(1)} = W_1 \cdot x + b_1$$
$$a^{(1)} = \sigma(z^{(1)})$$
2. Hidden layer to output layer
$$z^{(2)} = W_2 \cdot a^{(1)} + b_2$$
$$y_{\text{pred}} = a^{(2)} = \sigma(z^{(2)})$$
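As a minimal sketch of these two steps in tensor form (the shapes and initial values are taken from the worked example later in this section):

```python
import torch

# 2 inputs, 2 hidden units, 2 outputs; values match the manual example below
x  = torch.tensor([[0.05, 0.10]])
W1 = torch.tensor([[0.15, 0.20], [0.25, 0.30]]); b1 = torch.tensor([0.35, 0.35])
W2 = torch.tensor([[0.40, 0.45], [0.50, 0.55]]); b2 = torch.tensor([0.60, 0.60])

z1 = x @ W1.T + b1          # z^(1) = W1 · x + b1
a1 = torch.sigmoid(z1)      # a^(1) = sigma(z^(1))
z2 = a1 @ W2.T + b2         # z^(2) = W2 · a^(1) + b2
y_pred = torch.sigmoid(z2)  # y_pred = a^(2) = sigma(z^(2))
print(y_pred)
```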
2. Backpropagation
Backpropagation is the standard algorithm for training neural networks; it efficiently computes the gradients of the loss function with respect to the network weights. With these gradients, optimization algorithms such as gradient descent can update the weights to minimize the loss. The core idea of backpropagation is to use the chain rule to propagate error information layer by layer, from the output layer back toward the input layer.
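Concretely, for the two-layer network above, the chain rule expands the gradients as (written loosely, ignoring tensor shapes):
$$\frac{\partial L}{\partial W_2} = \frac{\partial L}{\partial a^{(2)}} \cdot \frac{\partial a^{(2)}}{\partial z^{(2)}} \cdot \frac{\partial z^{(2)}}{\partial W_2}$$
$$\frac{\partial L}{\partial W_1} = \frac{\partial L}{\partial a^{(2)}} \cdot \frac{\partial a^{(2)}}{\partial z^{(2)}} \cdot \frac{\partial z^{(2)}}{\partial a^{(1)}} \cdot \frac{\partial a^{(1)}}{\partial z^{(1)}} \cdot \frac{\partial z^{(1)}}{\partial W_1}$$
These are exactly the factor chains multiplied out by hand in the example below.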
```python
'''
Manual implementation of backpropagation
'''
import torch

# network inputs and hidden-layer bias
i1 = 0.05
i2 = 0.10
b1 = 0.35

def h1():
    w1 = torch.tensor(0.15)
    w2 = torch.tensor(0.20)
    l1 = i1 * w1 + i2 * w2 + b1
    return torch.sigmoid(l1)  # (1 + torch.e ** (-l1)) ** -1

print('output of neuron h1:', h1())

def h2():
    w3 = torch.tensor(0.25)
    w4 = torch.tensor(0.30)
    l2 = i1 * w3 + i2 * w4 + b1
    return torch.sigmoid(l2)  # (1 + torch.e ** (-l2)) ** -1

print('output of neuron h2:', h2())

# output-layer bias
b2 = 0.60

def o1():
    w5 = torch.tensor(0.4)
    w6 = torch.tensor(0.45)
    l3 = h1() * w5 + h2() * w6 + b2
    print('l3', l3)
    m1 = torch.sigmoid(l3)  # (1 + torch.e ** (-l3)) ** -1
    return m1

print('output of neuron o1:', o1())

def o2():
    w7 = torch.tensor(0.5)
    w8 = torch.tensor(0.55)
    l4 = h1() * w7 + h2() * w8 + b2
    print('l4', l4)
    m2 = torch.sigmoid(l4)  # (1 + torch.e ** (-l4)) ** -1
    return m2

print('output of neuron o2:', o2())

def mse():
    o1_target = 0.01
    o2_target = 0.99
    return 0.5 * ((o1() - o1_target) ** 2 + (o2() - o2_target) ** 2)

loss = mse()  # loss is a concrete value

# goal: gradient of loss w.r.t. w5
# dw5 ==> d(loss)/d(o1) * d(o1)/d(l3) * d(l3)/d(w5)
# (m1 - o1_target) * sigmoid(l3) * (1 - sigmoid(l3)) * h1
# (0.7513650695523157 - 0.01) * sigmoid(1.1059) * (1 - sigmoid(1.1059)) * 0.5933
dw5 = (0.7513650695523157 - 0.01) * (0.7513650695523157 * (1 - 0.7513650695523157)) * 0.5933
print('dw5', dw5)  # 0.08217119661422286

# goal: gradient of loss w.r.t. w7
# dw7 ==> d(loss)/d(o2) * d(o2)/d(l4) * d(l4)/d(w7)
# (m2 - o2_target) * sigmoid(l4) * (1 - sigmoid(l4)) * h1
# (0.7729 - 0.99) * sigmoid(1.2249) * (1 - sigmoid(1.2249)) * 0.5933
dw7 = (0.7729 - 0.99) * (0.7729 * (1 - 0.7729)) * 0.5933
print('dw7', dw7)

# goal: gradient of loss w.r.t. w1
# dw1 ==> d(loss)/d(o1) * d(o1)/d(l3) * d(l3)/d(h1) * d(h1)/d(l1) * d(l1)/d(w1)
# (m1 - o1_target) * sigmoid(l3) * (1 - sigmoid(l3)) * w5 * sigmoid(l1) * (1 - sigmoid(l1)) * i1
dw1 = (0.7513650695523157 - 0.01) * (0.7513650695523157 * (1 - 0.7513650695523157)) * 0.4 * 0.5933 * (1 - 0.5933) * 0.05
print('dw1', dw1)

# gradient descent update with learning rate 0.5
dw1 = 0.0004
lr = 0.5
w1 = 0.15 - lr * dw1
w5 = 0.40 - lr * dw5
w7 = 0.5 - lr * dw7
print('w1_new', w1)
print('w5_new', w5)
print('w7_new', w7)
```
```python
'''
Standard backpropagation, version 1
'''
import torch

class mynet(torch.nn.Module):
    def __init__(self):
        super(mynet, self).__init__()
        # define the network structure
        self.linear1 = torch.nn.Linear(2, 2)
        self.linear2 = torch.nn.Linear(2, 2)
        self.activation = torch.nn.Sigmoid()
        # initialize the parameters with the same values as the manual example
        self.linear1.weight.data = torch.tensor([[0.15, 0.20],
                                                 [0.25, 0.30]])
        self.linear2.weight.data = torch.tensor([[0.40, 0.45],
                                                 [0.50, 0.55]])
        self.linear1.bias.data = torch.tensor([0.35, 0.35])
        self.linear2.bias.data = torch.tensor([0.60, 0.60])

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation(x)
        x = self.linear2(x)
        x = self.activation(x)
        return x

def train():
    model = mynet()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    input = torch.tensor([[0.05, 0.1]])
    target = torch.tensor([[0.01, 0.99]])
    pred = model(input)  # torch.nn.Module implements __call__, which runs forward
    mse = torch.nn.MSELoss()
    loss = mse(pred, target)
    loss.backward()
    print(model.linear1.weight.grad)
    print(model.linear2.weight.grad)
    optimizer.step()
    # equivalent manual update:
    # model.linear1.weight.data -= 0.1 * model.linear1.weight.grad
    # model.linear2.weight.data -= 0.1 * model.linear2.weight.grad

train()
```
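Since the initial weights and inputs match the manual example, the gradients printed by autograd should agree with the hand-computed values: the first entry of model.linear2.weight.grad should be roughly 0.082 (dw5) and the first entry of model.linear1.weight.grad roughly 0.0004 (dw1).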
```python
'''
Standard backpropagation, version 2
'''
import torch

class mynet(torch.nn.Module):
    def __init__(self):
        super(mynet, self).__init__()
        # define the network structure with Sequential blocks (Linear + Sigmoid)
        self.hide1 = torch.nn.Sequential(torch.nn.Linear(2, 2), torch.nn.Sigmoid())
        self.out = torch.nn.Sequential(torch.nn.Linear(2, 2), torch.nn.Sigmoid())
        # initialize the parameters
        self.hide1[0].weight.data = torch.tensor([[0.15, 0.20],
                                                  [0.25, 0.30]])
        self.out[0].weight.data = torch.tensor([[0.40, 0.45],
                                                [0.50, 0.55]])
        self.hide1[0].bias.data = torch.tensor([0.35, 0.35])
        self.out[0].bias.data = torch.tensor([0.60, 0.60])

    def forward(self, x):
        x = self.hide1(x)
        x = self.out(x)
        return x

def train():
    model = mynet()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    input = torch.tensor([[0.05, 0.1]])
    target = torch.tensor([[0.01, 0.99]])
    pred = model(input)  # calling the model runs forward via torch.nn.Module.__call__
    mse = torch.nn.MSELoss()
    loss = mse(pred, target)
    loss.backward()
    print(model.hide1[0].weight.grad)
    print(model.out[0].weight.grad)
    optimizer.step()

train()
```