TensorFlow Case 4: Face Recognition (Loss Function Selection, Loading VGG16, and an Improved Implementation)

  • 🍨 This post is a study-log entry from the 365-Day Deep Learning Training Camp
  • 🍖 Original author: K同学啊

Preface

  • Counting the earlier PyTorch version, this model has taken quite some time, yet the results never reached what I hoped for: the gap between training and validation accuracy is too large.
  • I suspected the model might not be expressive enough, but stacking more layers brings model degradation instead. A plausible fix is to try a ResNet-style model; I will update this later.
  • This revision of VGG16 changes three things, explained in detail below.
  • Feel free to bookmark and follow; I will keep updating.

1. Concepts and API Notes

1. Model Improvements and Overview

The VGG16 Model

VGG16 is a classic baseline model made up of 13 convolutional layers and 3 fully connected layers, as illustrated below:

(figure: VGG16 architecture diagram)

Modifications to VGG16 in This Experiment

  • Freeze the 13 convolutional layers and retrain only the classifier head
  • Add a BN layer and a global average pooling layer to the head; the pooling reduces the feature dimension, since VGG16 is computationally heavy
  • Add Dropout layers between the fully connected layers
  • Modified code:
# Load the official VGG16 model without its top classifier
vgg16_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=(256, 256, 3))

# Freeze the convolutional weights
for layer in vgg16_model.layers:
    layer.trainable = False   # careful: the attribute is 'trainable'; a typo such as 'trainble' silently leaves every layer trainable

# Take the convolutional output
x = vgg16_model.output
# Add a BN layer
x = layers.BatchNormalization()(x)
# Global average pooling to reduce the feature dimension and computation
x = layers.GlobalAveragePooling2D()(x)
# Fully connected layers with Dropout
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(512, activation='relu')(x)
x = layers.Dropout(0.5)(x)
predict = layers.Dense(len(classnames))(x)  # no activation: raw logits, paired with from_logits=True at compile time

# Build the model
model = models.Model(inputs=vgg16_model.input, outputs=predict)
model.summary()
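A quick parameter count (plain Python, my own illustration) shows why the head uses GlobalAveragePooling2D rather than Flatten on VGG16's 8x8x512 output:

```python
# Parameters of the first Dense(1024) layer depending on what feeds it.
conv_h, conv_w, conv_c = 8, 8, 512            # VGG16 conv output for 256x256 input

flatten_features = conv_h * conv_w * conv_c   # Flatten keeps every spatial position
gap_features = conv_c                         # GAP averages the 8x8 grid per channel

dense_units = 1024
params_flatten = flatten_features * dense_units + dense_units  # weights + biases
params_gap = gap_features * dense_units + dense_units

print(params_flatten)  # 33555456
print(params_gap)      # 525312, the figure reported in the model summary
```

A roughly 64x reduction in the first Dense layer, for free.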

Result

The best epoch reached loss: 0.0505 - accuracy: 0.9847 - val_loss: 3.7758 - val_accuracy: 0.4750. To push accuracy further, I think the simplest route is to bring in a ResNet-style network; I will try that later.

2. API Notes

Loss Functions

Which loss to use depends on the `label_mode` chosen when loading the data:

1. binary_crossentropy (log loss)

The loss paired with a sigmoid output, for binary classification problems.

2. categorical_crossentropy (multi-class log loss)

The loss paired with a softmax output. Use categorical_crossentropy when the labels are one-hot encoded (label_mode='categorical').
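As a sanity check on the pairing between loss and label format, here is a small NumPy sketch (my own illustration, not from the original post): with one-hot labels (label_mode='categorical'), categorical crossentropy picks out exactly the value that the sparse variant computes from the integer label (label_mode='int'):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0])
probs = softmax(logits)

one_hot = np.array([1.0, 0.0, 0.0])        # label_mode='categorical'
cce = -np.sum(one_hot * np.log(probs))     # categorical_crossentropy

int_label = 0                              # label_mode='int'
scce = -np.log(probs[int_label])           # sparse_categorical_crossentropy

print(cce, scce)  # same number, different label encodings
```

Mismatching the two (one-hot labels with the sparse loss, or integer labels with categorical_crossentropy) raises a shape error at fit time.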

Loading VGG16 in TensorFlow

1. Load VGG16 including the top fully connected layers

from tensorflow.keras.applications import VGG16

# Load VGG16 with its top classifier, using ImageNet pre-trained weights
model = VGG16(include_top=True, weights='imagenet', input_shape=(224, 224, 3))
model.summary()
2. Load VGG16 without the top fully connected layers

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

# Load VGG16 without its top classifier, using ImageNet pre-trained weights
base_model = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))

# Freeze the convolutional base (optional)
for layer in base_model.layers:
    layer.trainable = False

# Take the convolutional base output
x = base_model.output
# Add a new classifier head: this is the part to adapt to your own task
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)  # 2 output classes

# Build the new model
model = Model(inputs=base_model.input, outputs=predictions)
model.summary()
3. Use a custom input tensor

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input

# Define the input tensor
input_tensor = Input(shape=(224, 224, 3))

# Load VGG16 on top of the custom input tensor
model = VGG16(include_top=True, weights='imagenet', input_tensor=input_tensor)
model.summary()
4. Without pre-trained weights

from tensorflow.keras.applications import VGG16

# Load the VGG16 architecture with randomly initialized weights
model = VGG16(include_top=True, weights=None, input_shape=(224, 224, 3))
model.summary()

2. Solving the Face Recognition Task

1. Data Processing

1. Import the libraries

import tensorflow as tf
from tensorflow.keras import datasets, models, layers
import numpy as np

# List all physical GPUs
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    gpu0 = gpus[0]   # if there are several, take the first one
    tf.config.experimental.set_memory_growth(gpu0, True)   # grow GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")   # expose only the first GPU
gpus   # show the GPUs
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

2. Inspect the data directory

In the data folder, each person's images sit in their own sub-folder.

import os, PIL, pathlib

data_dir = './data/'
data_dir = pathlib.Path(data_dir)
# Get the names of all sub-folders under the data directory (one per class)
classnames = os.listdir(data_dir)
classnames
['Angelina Jolie','Brad Pitt','Denzel Washington','Hugh Jackman','Jennifer Lawrence','Johnny Depp','Kate Winslet','Leonardo DiCaprio','Megan Fox','Natalie Portman','Nicole Kidman','Robert Downey Jr','Sandra Bullock','Scarlett Johansson','Tom Cruise','Tom Hanks','Will Smith']

3. Split the data

batch_size = 32

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/',
    batch_size=batch_size,
    shuffle=True,
    validation_split=0.2,   # 20% validation, 80% training
    subset='training',
    seed=42,
    label_mode='categorical',   # one-hot encoded labels
    image_size=(256, 256)
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/',
    batch_size=batch_size,
    shuffle=True,
    validation_split=0.2,
    seed=42,
    subset='validation',
    image_size=(256, 256),
    label_mode='categorical'   # one-hot encoded labels
)
Found 1800 files belonging to 17 classes.
Using 1440 files for training.
Found 1800 files belonging to 17 classes.
Using 360 files for validation.
# Print the shape of one batch
for image, label in train_ds.take(1):
    print(image.shape)
    print(label.shape)
(32, 256, 256, 3)
(32, 17)
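The console output above can be checked by hand. This quick arithmetic (my own sketch) reproduces the file counts and the "45/45" progress bar seen later in the training log:

```python
import math

total_files = 1800
val_split = 0.2
batch_size = 32

val_files = int(total_files * val_split)               # files held out for validation
train_files = total_files - val_files                  # files used for training
steps_per_epoch = math.ceil(train_files / batch_size)  # batches per epoch

print(train_files, val_files, steps_per_epoch)  # 1440 360 45
```

The label shape (32, 17) likewise follows from the batch size and the 17 one-hot-encoded classes.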

4. Visualize the data

# Show one batch of samples
import matplotlib.pyplot as plt

plt.figure(figsize=(20, 10))
for images, labels in train_ds.take(1):
    for i in range(20):
        plt.subplot(5, 10, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(classnames[np.argmax(labels[i], axis=0)])
        plt.axis('off')
plt.show()


(figure: a 5x10 grid of sample face images with their class labels)

2. Building the VGG16 Model

Changes to VGG16:

  • Freeze the 13 convolutional layers and retrain only the classifier head
  • Add a BN layer and a global average pooling layer to the head; the pooling reduces the feature dimension, since VGG16 is computationally heavy
  • Add Dropout layers between the fully connected layers

# Load the official VGG16 model without its top classifier
vgg16_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=(256, 256, 3))

# Freeze the convolutional weights
for layer in vgg16_model.layers:
    layer.trainable = False   # careful: the attribute is 'trainable'; a typo such as 'trainble' silently leaves every layer trainable

# Take the convolutional output
x = vgg16_model.output
# Add a BN layer
x = layers.BatchNormalization()(x)
# Global average pooling to reduce the feature dimension and computation
x = layers.GlobalAveragePooling2D()(x)
# Fully connected layers with Dropout
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(512, activation='relu')(x)
x = layers.Dropout(0.5)(x)
predict = layers.Dense(len(classnames))(x)  # no activation: raw logits, paired with from_logits=True at compile time

# Build the model
model = models.Model(inputs=vgg16_model.input, outputs=predict)
model.summary()
Model: "model"
_________________________________________________________________
 Layer (type)                 Output Shape              Param #
=================================================================
 input_1 (InputLayer)         [(None, 256, 256, 3)]     0
 block1_conv1 (Conv2D)        (None, 256, 256, 64)      1792
 block1_conv2 (Conv2D)        (None, 256, 256, 64)      36928
 block1_pool (MaxPooling2D)   (None, 128, 128, 64)      0
 block2_conv1 (Conv2D)        (None, 128, 128, 128)     73856
 block2_conv2 (Conv2D)        (None, 128, 128, 128)     147584
 block2_pool (MaxPooling2D)   (None, 64, 64, 128)       0
 block3_conv1 (Conv2D)        (None, 64, 64, 256)       295168
 block3_conv2 (Conv2D)        (None, 64, 64, 256)       590080
 block3_conv3 (Conv2D)        (None, 64, 64, 256)       590080
 block3_pool (MaxPooling2D)   (None, 32, 32, 256)       0
 block4_conv1 (Conv2D)        (None, 32, 32, 512)       1180160
 block4_conv2 (Conv2D)        (None, 32, 32, 512)       2359808
 block4_conv3 (Conv2D)        (None, 32, 32, 512)       2359808
 block4_pool (MaxPooling2D)   (None, 16, 16, 512)       0
 block5_conv1 (Conv2D)        (None, 16, 16, 512)       2359808
 block5_conv2 (Conv2D)        (None, 16, 16, 512)       2359808
 block5_conv3 (Conv2D)        (None, 16, 16, 512)       2359808
 block5_pool (MaxPooling2D)   (None, 8, 8, 512)         0
 batch_normalization (BatchNormalization)   (None, 8, 8, 512)   2048
 global_average_pooling2d (GlobalAveragePooling2D)   (None, 512)   0
 dense (Dense)                (None, 1024)              525312
 dropout (Dropout)            (None, 1024)              0
 dense_1 (Dense)              (None, 512)               524800
 dropout_1 (Dropout)          (None, 512)               0
 dense_2 (Dense)              (None, 17)                8721
=================================================================
Total params: 15,775,569
Trainable params: 15,774,545
Non-trainable params: 1,024
_________________________________________________________________
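The summary totals can be verified by hand (my own arithmetic from the per-layer numbers above). Note also that "Trainable params: 15,774,545" implies the convolutional base was not actually frozen in this run (exactly what the `trainble` typo would cause); with the freeze in effect, the conv base's 14,714,688 parameters would appear under non-trainable too. The 1,024 non-trainable parameters are the BatchNormalization layer's moving mean and variance (2 per channel x 512 channels):

```python
# Per-layer parameter counts copied from the model summary
conv_params = [1792, 36928, 73856, 147584,    # blocks 1-2
               295168, 590080, 590080,        # block 3
               1180160, 2359808, 2359808,     # block 4
               2359808, 2359808, 2359808]     # block 5
head_params = [2048, 525312, 524800, 8721]    # BN, dense, dense_1, dense_2

print(sum(conv_params))                     # 14714688, the VGG16 conv base
print(sum(conv_params) + sum(head_params))  # 15775569, matches "Total params"
print(2 * 512)                              # 1024, matches "Non-trainable params"
```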

3. Model Training

1. Set the hyperparameters

# Initial learning rate
initial_learning_rate = 1e-3

# Exponentially decaying learning rate
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=60,    # decay once every 60 steps
    decay_rate=0.96,   # multiply the rate by 0.96 at each decay
    staircase=True
)

# Optimizer driven by the schedule (the original code passed a constant 1e-3 here, which left lr_schedule unused)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

# Compile: the model outputs logits, so from_logits=True
model.compile(optimizer=optimizer,
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
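To see what the schedule actually does, here is a plain-Python mirror of the documented ExponentialDecay formula (my own sketch): with staircase=True the rate holds at 1e-3 for the first 60 steps, drops to 0.96e-3, and so on. Since one epoch is 45 batches here, the rate decays roughly every 1.3 epochs.

```python
def exponential_decay(step, initial_lr=1e-3, decay_steps=60,
                      decay_rate=0.96, staircase=True):
    """Plain-Python mirror of tf.keras.optimizers.schedules.ExponentialDecay."""
    exponent = step / decay_steps
    if staircase:
        exponent = int(exponent)   # discrete drops instead of a smooth curve
    return initial_lr * decay_rate ** exponent

for step in (0, 59, 60, 120, 600):
    print(step, exponential_decay(step))
```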

2. Train the model

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

epochs = 100

# Checkpoint the best model seen during training
checkpointer = ModelCheckpoint('best_model.h5',
                               monitor='val_accuracy',   # metric being tracked
                               verbose=1,
                               save_best_only=True,
                               save_weights_only=True)

# Early stopping: halt if val_accuracy fails to improve by min_delta within `patience` epochs
earlystopper = EarlyStopping(monitor='val_accuracy',
                             verbose=1,
                             patience=20,
                             min_delta=0.01)

history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer, earlystopper])
Epoch 1/100
2024-11-01 19:31:48.093783: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8101
2024-11-01 19:31:50.361608: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
45/45 [==============================] - ETA: 0s - loss: 2.8548 - accuracy: 0.0826
Epoch 1: val_accuracy improved from -inf to 0.13056, saving model to best_model.h5
45/45 [==============================] - 14s 205ms/step - loss: 2.8548 - accuracy: 0.0826 - val_loss: 8.0561 - val_accuracy: 0.1306
Epoch 2/100
45/45 [==============================] - ETA: 0s - loss: 2.7271 - accuracy: 0.1181
Epoch 2: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 142ms/step - loss: 2.7271 - accuracy: 0.1181 - val_loss: 3.7047 - val_accuracy: 0.0639
Epoch 3/100
45/45 [==============================] - ETA: 0s - loss: 2.6583 - accuracy: 0.1354
Epoch 3: val_accuracy did not improve from 0.13056
45/45 [==============================] - 7s 144ms/step - loss: 2.6583 - accuracy: 0.1354 - val_loss: 8.0687 - val_accuracy: 0.0806
Epoch 4/100
45/45 [==============================] - ETA: 0s - loss: 2.5833 - accuracy: 0.1444
Epoch 4: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.5833 - accuracy: 0.1444 - val_loss: 4.7184 - val_accuracy: 0.1000
Epoch 5/100
45/45 [==============================] - ETA: 0s - loss: 2.5115 - accuracy: 0.1576
Epoch 5: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 142ms/step - loss: 2.5115 - accuracy: 0.1576 - val_loss: 61.5911 - val_accuracy: 0.0639
Epoch 6/100
45/45 [==============================] - ETA: 0s - loss: 2.4402 - accuracy: 0.1674
Epoch 6: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 142ms/step - loss: 2.4402 - accuracy: 0.1674 - val_loss: 4.6790 - val_accuracy: 0.0944
Epoch 7/100
45/45 [==============================] - ETA: 0s - loss: 2.3911 - accuracy: 0.1951
Epoch 7: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.3911 - accuracy: 0.1951 - val_loss: 2.7717 - val_accuracy: 0.1028
Epoch 8/100
45/45 [==============================] - ETA: 0s - loss: 2.3331 - accuracy: 0.1931
Epoch 8: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 144ms/step - loss: 2.3331 - accuracy: 0.1931 - val_loss: 8.2605 - val_accuracy: 0.0639
Epoch 9/100
45/45 [==============================] - ETA: 0s - loss: 2.2922 - accuracy: 0.2021
Epoch 9: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.2922 - accuracy: 0.2021 - val_loss: 51.5976 - val_accuracy: 0.0306
Epoch 10/100
45/45 [==============================] - ETA: 0s - loss: 2.2182 - accuracy: 0.2313
Epoch 10: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.2182 - accuracy: 0.2313 - val_loss: 4.3942 - val_accuracy: 0.0611
Epoch 11/100
45/45 [==============================] - ETA: 0s - loss: 2.2049 - accuracy: 0.2361
Epoch 11: val_accuracy improved from 0.13056 to 0.17778, saving model to best_model.h5
45/45 [==============================] - 7s 145ms/step - loss: 2.2049 - accuracy: 0.2361 - val_loss: 2.4072 - val_accuracy: 0.1778
Epoch 12/100
45/45 [==============================] - ETA: 0s - loss: 2.1242 - accuracy: 0.2576
Epoch 12: val_accuracy improved from 0.17778 to 0.18056, saving model to best_model.h5
45/45 [==============================] - 7s 145ms/step - loss: 2.1242 - accuracy: 0.2576 - val_loss: 2.6218 - val_accuracy: 0.1806
Epoch 13/100
45/45 [==============================] - ETA: 0s - loss: 2.0634 - accuracy: 0.2639
Epoch 13: val_accuracy did not improve from 0.18056
45/45 [==============================] - 6s 142ms/step - loss: 2.0634 - accuracy: 0.2639 - val_loss: 14.2102 - val_accuracy: 0.1556
Epoch 14/100
45/45 [==============================] - ETA: 0s - loss: 2.0379 - accuracy: 0.2861
Epoch 14: val_accuracy did not improve from 0.18056
45/45 [==============================] - 6s 143ms/step - loss: 2.0379 - accuracy: 0.2861 - val_loss: 931.4739 - val_accuracy: 0.1556
Epoch 15/100
45/45 [==============================] - ETA: 0s - loss: 1.9782 - accuracy: 0.3063
Epoch 15: val_accuracy improved from 0.18056 to 0.21667, saving model to best_model.h5
45/45 [==============================] - 7s 144ms/step - loss: 1.9782 - accuracy: 0.3063 - val_loss: 2.3025 - val_accuracy: 0.2167
Epoch 16/100
45/45 [==============================] - ETA: 0s - loss: 1.9299 - accuracy: 0.3306
Epoch 16: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.9299 - accuracy: 0.3306 - val_loss: 2.2587 - val_accuracy: 0.2000
Epoch 17/100
45/45 [==============================] - ETA: 0s - loss: 1.8289 - accuracy: 0.3590
Epoch 17: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.8289 - accuracy: 0.3590 - val_loss: 2.5047 - val_accuracy: 0.1722
Epoch 18/100
45/45 [==============================] - ETA: 0s - loss: 1.7912 - accuracy: 0.3694
Epoch 18: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 142ms/step - loss: 1.7912 - accuracy: 0.3694 - val_loss: 3.1102 - val_accuracy: 0.1722
Epoch 19/100
45/45 [==============================] - ETA: 0s - loss: 1.7762 - accuracy: 0.3764
Epoch 19: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 142ms/step - loss: 1.7762 - accuracy: 0.3764 - val_loss: 2.7225 - val_accuracy: 0.2083
Epoch 20/100
45/45 [==============================] - ETA: 0s - loss: 1.7182 - accuracy: 0.3979
Epoch 20: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.7182 - accuracy: 0.3979 - val_loss: 3.4486 - val_accuracy: 0.1528
Epoch 21/100
45/45 [==============================] - ETA: 0s - loss: 1.6341 - accuracy: 0.4208
Epoch 21: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.6341 - accuracy: 0.4208 - val_loss: 2.7709 - val_accuracy: 0.1806
Epoch 22/100
45/45 [==============================] - ETA: 0s - loss: 1.5667 - accuracy: 0.4486
Epoch 22: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 144ms/step - loss: 1.5667 - accuracy: 0.4486 - val_loss: 4.2764 - val_accuracy: 0.1583
Epoch 23/100
45/45 [==============================] - ETA: 0s - loss: 1.4579 - accuracy: 0.4875
Epoch 23: val_accuracy improved from 0.21667 to 0.26111, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 1.4579 - accuracy: 0.4875 - val_loss: 32579.7422 - val_accuracy: 0.2611
Epoch 24/100
45/45 [==============================] - ETA: 0s - loss: 1.4373 - accuracy: 0.4854
Epoch 24: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 145ms/step - loss: 1.4373 - accuracy: 0.4854 - val_loss: 8038.8555 - val_accuracy: 0.1972
Epoch 25/100
45/45 [==============================] - ETA: 0s - loss: 1.3630 - accuracy: 0.5139
Epoch 25: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 145ms/step - loss: 1.3630 - accuracy: 0.5139 - val_loss: 2.3408 - val_accuracy: 0.2528
Epoch 26/100
45/45 [==============================] - ETA: 0s - loss: 1.3181 - accuracy: 0.5375
Epoch 26: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 144ms/step - loss: 1.3181 - accuracy: 0.5375 - val_loss: 2.1877 - val_accuracy: 0.2500
Epoch 27/100
45/45 [==============================] - ETA: 0s - loss: 1.2544 - accuracy: 0.5583
Epoch 27: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 144ms/step - loss: 1.2544 - accuracy: 0.5583 - val_loss: 2.6184 - val_accuracy: 0.1861
Epoch 28/100
45/45 [==============================] - ETA: 0s - loss: 1.1877 - accuracy: 0.5813
Epoch 28: val_accuracy did not improve from 0.26111
45/45 [==============================] - 6s 144ms/step - loss: 1.1877 - accuracy: 0.5813 - val_loss: 3.0485 - val_accuracy: 0.2500
Epoch 29/100
45/45 [==============================] - ETA: 0s - loss: 1.0968 - accuracy: 0.6132
Epoch 29: val_accuracy did not improve from 0.26111
45/45 [==============================] - 6s 143ms/step - loss: 1.0968 - accuracy: 0.6132 - val_loss: 61754.2734 - val_accuracy: 0.1917
Epoch 30/100
45/45 [==============================] - ETA: 0s - loss: 1.0537 - accuracy: 0.6424
Epoch 30: val_accuracy improved from 0.26111 to 0.26667, saving model to best_model.h5
45/45 [==============================] - 7s 148ms/step - loss: 1.0537 - accuracy: 0.6424 - val_loss: 2.3469 - val_accuracy: 0.2667
Epoch 31/100
45/45 [==============================] - ETA: 0s - loss: 1.0427 - accuracy: 0.6306
Epoch 31: val_accuracy did not improve from 0.26667
45/45 [==============================] - 6s 143ms/step - loss: 1.0427 - accuracy: 0.6306 - val_loss: 3.4498 - val_accuracy: 0.2250
Epoch 32/100
45/45 [==============================] - ETA: 0s - loss: 1.0697 - accuracy: 0.6403
Epoch 32: val_accuracy improved from 0.26667 to 0.37222, saving model to best_model.h5
45/45 [==============================] - 7s 146ms/step - loss: 1.0697 - accuracy: 0.6403 - val_loss: 2.8960 - val_accuracy: 0.3722
Epoch 33/100
45/45 [==============================] - ETA: 0s - loss: 0.9062 - accuracy: 0.6840
Epoch 33: val_accuracy did not improve from 0.37222
45/45 [==============================] - 6s 143ms/step - loss: 0.9062 - accuracy: 0.6840 - val_loss: 102.1351 - val_accuracy: 0.3028
Epoch 34/100
45/45 [==============================] - ETA: 0s - loss: 0.8220 - accuracy: 0.7118
Epoch 34: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 144ms/step - loss: 0.8220 - accuracy: 0.7118 - val_loss: 3.1855 - val_accuracy: 0.2583
Epoch 35/100
45/45 [==============================] - ETA: 0s - loss: 0.7424 - accuracy: 0.7431
Epoch 35: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 144ms/step - loss: 0.7424 - accuracy: 0.7431 - val_loss: 34309.0664 - val_accuracy: 0.3028
Epoch 36/100
45/45 [==============================] - ETA: 0s - loss: 0.7257 - accuracy: 0.7535
Epoch 36: val_accuracy did not improve from 0.37222
45/45 [==============================] - 6s 144ms/step - loss: 0.7257 - accuracy: 0.7535 - val_loss: 89.2148 - val_accuracy: 0.2361
Epoch 37/100
45/45 [==============================] - ETA: 0s - loss: 0.6695 - accuracy: 0.7799
Epoch 37: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 146ms/step - loss: 0.6695 - accuracy: 0.7799 - val_loss: 3590.8940 - val_accuracy: 0.1889
Epoch 38/100
45/45 [==============================] - ETA: 0s - loss: 0.5841 - accuracy: 0.7917
Epoch 38: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 145ms/step - loss: 0.5841 - accuracy: 0.7917 - val_loss: 5.1283 - val_accuracy: 0.2222
Epoch 39/100
45/45 [==============================] - ETA: 0s - loss: 0.5989 - accuracy: 0.7840
Epoch 39: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 145ms/step - loss: 0.5989 - accuracy: 0.7840 - val_loss: 3.7647 - val_accuracy: 0.2833
Epoch 40/100
45/45 [==============================] - ETA: 0s - loss: 0.5431 - accuracy: 0.8181
Epoch 40: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 144ms/step - loss: 0.5431 - accuracy: 0.8181 - val_loss: 3.9703 - val_accuracy: 0.3028
Epoch 41/100
45/45 [==============================] - ETA: 0s - loss: 0.4810 - accuracy: 0.8333
Epoch 41: val_accuracy improved from 0.37222 to 0.40278, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 0.4810 - accuracy: 0.8333 - val_loss: 2.7934 - val_accuracy: 0.4028
Epoch 42/100
45/45 [==============================] - ETA: 0s - loss: 0.5016 - accuracy: 0.8278
Epoch 42: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.5016 - accuracy: 0.8278 - val_loss: 58485.9453 - val_accuracy: 0.2583
Epoch 43/100
45/45 [==============================] - ETA: 0s - loss: 0.4782 - accuracy: 0.8424
Epoch 43: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 144ms/step - loss: 0.4782 - accuracy: 0.8424 - val_loss: 3.6065 - val_accuracy: 0.3694
Epoch 44/100
45/45 [==============================] - ETA: 0s - loss: 0.3587 - accuracy: 0.8785
Epoch 44: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3587 - accuracy: 0.8785 - val_loss: 5.5882 - val_accuracy: 0.3806
Epoch 45/100
45/45 [==============================] - ETA: 0s - loss: 0.3143 - accuracy: 0.8889
Epoch 45: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3143 - accuracy: 0.8889 - val_loss: 2.7883 - val_accuracy: 0.3861
Epoch 46/100
45/45 [==============================] - ETA: 0s - loss: 0.3707 - accuracy: 0.8757
Epoch 46: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3707 - accuracy: 0.8757 - val_loss: 3.2097 - val_accuracy: 0.3583
Epoch 47/100
45/45 [==============================] - ETA: 0s - loss: 0.3418 - accuracy: 0.8799
Epoch 47: val_accuracy did not improve from 0.40278
45/45 [==============================] - 6s 144ms/step - loss: 0.3418 - accuracy: 0.8799 - val_loss: 3.1672 - val_accuracy: 0.4028
Epoch 48/100
45/45 [==============================] - ETA: 0s - loss: 0.3202 - accuracy: 0.8931
Epoch 48: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3202 - accuracy: 0.8931 - val_loss: 16.9275 - val_accuracy: 0.3944
Epoch 49/100
45/45 [==============================] - ETA: 0s - loss: 0.2668 - accuracy: 0.9118
Epoch 49: val_accuracy improved from 0.40278 to 0.41944, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 0.2668 - accuracy: 0.9118 - val_loss: 2.8230 - val_accuracy: 0.4194
Epoch 50/100
45/45 [==============================] - ETA: 0s - loss: 0.2676 - accuracy: 0.9021
Epoch 50: val_accuracy did not improve from 0.41944
45/45 [==============================] - 7s 144ms/step - loss: 0.2676 - accuracy: 0.9021 - val_loss: 2671.1196 - val_accuracy: 0.3639
Epoch 51/100
45/45 [==============================] - ETA: 0s - loss: 0.2152 - accuracy: 0.9306
Epoch 51: val_accuracy improved from 0.41944 to 0.45556, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 0.2152 - accuracy: 0.9306 - val_loss: 2.5370 - val_accuracy: 0.4556
Epoch 52/100
45/45 [==============================] - ETA: 0s - loss: 0.1308 - accuracy: 0.9611
Epoch 52: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 144ms/step - loss: 0.1308 - accuracy: 0.9611 - val_loss: 2.9426 - val_accuracy: 0.4444
Epoch 53/100
45/45 [==============================] - ETA: 0s - loss: 0.1306 - accuracy: 0.9556
Epoch 53: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 145ms/step - loss: 0.1306 - accuracy: 0.9556 - val_loss: 3.2494 - val_accuracy: 0.3917
Epoch 54/100
45/45 [==============================] - ETA: 0s - loss: 0.1515 - accuracy: 0.9500
Epoch 54: val_accuracy did not improve from 0.45556
45/45 [==============================] - 6s 144ms/step - loss: 0.1515 - accuracy: 0.9500 - val_loss: 4461.8813 - val_accuracy: 0.3611
Epoch 55/100
45/45 [==============================] - ETA: 0s - loss: 0.2079 - accuracy: 0.9285
Epoch 55: val_accuracy did not improve from 0.45556
45/45 [==============================] - 6s 144ms/step - loss: 0.2079 - accuracy: 0.9285 - val_loss: 4.7424 - val_accuracy: 0.3917
Epoch 56/100
45/45 [==============================] - ETA: 0s - loss: 0.2407 - accuracy: 0.9076
Epoch 56: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 145ms/step - loss: 0.2407 - accuracy: 0.9076 - val_loss: 3.3555 - val_accuracy: 0.3889
Epoch 57/100
45/45 [==============================] - ETA: 0s - loss: 0.1948 - accuracy: 0.9333
Epoch 57: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 145ms/step - loss: 0.1948 - accuracy: 0.9333 - val_loss: 3.4168 - val_accuracy: 0.3861
Epoch 58/100
45/45 [==============================] - ETA: 0s - loss: 0.1534 - accuracy: 0.9431
Epoch 58: val_accuracy improved from 0.45556 to 0.47222, saving model to best_model.h5
45/45 [==============================] - 7s 146ms/step - loss: 0.1534 - accuracy: 0.9431 - val_loss: 2.7895 - val_accuracy: 0.4722
Epoch 59/100
45/45 [==============================] - ETA: 0s - loss: 0.1457 - accuracy: 0.9549
Epoch 59: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 144ms/step - loss: 0.1457 - accuracy: 0.9549 - val_loss: 6.3610 - val_accuracy: 0.3444
Epoch 60/100
45/45 [==============================] - ETA: 0s - loss: 0.2078 - accuracy: 0.9306
Epoch 60: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.2078 - accuracy: 0.9306 - val_loss: 3.5834 - val_accuracy: 0.4056
Epoch 61/100
45/45 [==============================] - ETA: 0s - loss: 0.2005 - accuracy: 0.9361
Epoch 61: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 144ms/step - loss: 0.2005 - accuracy: 0.9361 - val_loss: 4.0683 - val_accuracy: 0.3861
Epoch 62/100
45/45 [==============================] - ETA: 0s - loss: 0.1815 - accuracy: 0.9375
Epoch 62: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1815 - accuracy: 0.9375 - val_loss: 3.1445 - val_accuracy: 0.4611
Epoch 63/100
45/45 [==============================] - ETA: 0s - loss: 0.1027 - accuracy: 0.9722
Epoch 63: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1027 - accuracy: 0.9722 - val_loss: 3.0654 - val_accuracy: 0.4500
Epoch 64/100
45/45 [==============================] - ETA: 0s - loss: 0.1370 - accuracy: 0.9535
Epoch 64: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1370 - accuracy: 0.9535 - val_loss: 3.1589 - val_accuracy: 0.4667
Epoch 65/100
45/45 [==============================] - ETA: 0s - loss: 0.1530 - accuracy: 0.9576
Epoch 65: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1530 - accuracy: 0.9576 - val_loss: 19.4580 - val_accuracy: 0.3722
Epoch 66/100
45/45 [==============================] - ETA: 0s - loss: 0.1092 - accuracy: 0.9625
Epoch 66: val_accuracy did not improve from 0.47222
45/45 [==============================] - 6s 143ms/step - loss: 0.1092 - accuracy: 0.9625 - val_loss: 263474.1250 - val_accuracy: 0.2639
Epoch 67/100
45/45 [==============================] - ETA: 0s - loss: 0.1094 - accuracy: 0.9639
Epoch 67: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 144ms/step - loss: 0.1094 - accuracy: 0.9639 - val_loss: 50495.4219 - val_accuracy: 0.4222
Epoch 68/100
45/45 [==============================] - ETA: 0s - loss: 0.0843 - accuracy: 0.9694
Epoch 68: val_accuracy improved from 0.47222 to 0.47500, saving model to best_model.h5
45/45 [==============================] - 7s 145ms/step - loss: 0.0843 - accuracy: 0.9694 - val_loss: 20.9734 - val_accuracy: 0.4750
Epoch 69/100
45/45 [==============================] - ETA: 0s - loss: 0.1767 - accuracy: 0.9458
Epoch 69: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1767 - accuracy: 0.9458 - val_loss: 1322.2261 - val_accuracy: 0.3583
Epoch 70/100
45/45 [==============================] - ETA: 0s - loss: 0.1305 - accuracy: 0.9479
Epoch 70: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 144ms/step - loss: 0.1305 - accuracy: 0.9479 - val_loss: 4.3810 - val_accuracy: 0.3889
Epoch 71/100
45/45 [==============================] - ETA: 0s - loss: 0.1202 - accuracy: 0.9569
Epoch 71: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 144ms/step - loss: 0.1202 - accuracy: 0.9569 - val_loss: 144.1233 - val_accuracy: 0.1361
Epoch 72/100
45/45 [==============================] - ETA: 0s - loss: 0.0746 - accuracy: 0.9785
Epoch 72: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 144ms/step - loss: 0.0746 - accuracy: 0.9785 - val_loss: 3.0208 - val_accuracy: 0.4417
Epoch 73/100
45/45 [==============================] - ETA: 0s - loss: 0.1549 - accuracy: 0.9542
Epoch 73: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1549 - accuracy: 0.9542 - val_loss: 4.0066 - val_accuracy: 0.4333
Epoch 74/100
45/45 [==============================] - ETA: 0s - loss: 0.1743 - accuracy: 0.9444
Epoch 74: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1743 - accuracy: 0.9444 - val_loss: 373.7328 - val_accuracy: 0.4250
Epoch 75/100
45/45 [==============================] - ETA: 0s - loss: 0.1104 - accuracy: 0.9611
Epoch 75: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1104 - accuracy: 0.9611 - val_loss: 4.0707 - val_accuracy: 0.4222
Epoch 76/100
45/45 [==============================] - ETA: 0s - loss: 0.1021 - accuracy: 0.9639
Epoch 76: val_accuracy did not improve from 0.47500
45/45 [==============================] - 6s 144ms/step - loss: 0.1021 - accuracy: 0.9639 - val_loss: 4.0057 - val_accuracy: 0.3944
Epoch 77/100
45/45 [==============================] - ETA: 0s - loss: 0.1100 - accuracy: 0.9618
Epoch 77: val_accuracy did not improve from 0.47500
45/45 [==============================] - 6s 143ms/step - loss: 0.1100 - accuracy: 0.9618 - val_loss: 4.1805 - val_accuracy: 0.4389
Epoch 78/100
45/45 [==============================] - ETA: 0s - loss: 0.0505 - accuracy: 0.9847
Epoch 78: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.0505 - accuracy: 0.9847 - val_loss: 3.7758 - val_accuracy: 0.4750
Epoch 78: early stopping
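The "Epoch 78: early stopping" line follows directly from the callback settings: the last val_accuracy improvement larger than min_delta=0.01 was at epoch 58 (0.45556 -> 0.47222), and patience=20 ran out exactly 20 epochs later. A simplified reconstruction of the rule (my own sketch, not the Keras source):

```python
def early_stop_epoch(val_accs, patience=20, min_delta=0.01):
    """Return the 1-based epoch at which training would halt, or None."""
    best = float('-inf')
    wait = 0
    for epoch, acc in enumerate(val_accs, start=1):
        if acc > best + min_delta:   # counts as an improvement
            best = acc
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# A toy curve: one real improvement at epoch 2, then a long plateau
print(early_stop_epoch([0.30, 0.45] + [0.45] * 25))  # stops at epoch 22
```

Small gains such as 0.47222 -> 0.47500 at epoch 68 still trigger the checkpoint (which uses no min_delta) but do not reset the early-stopping counter.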

4. Results and Prediction

1. Plot the curves

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(len(loss))

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

(figure: training/validation accuracy and loss curves)

  • The validation loss occasionally explodes; I retried several times and every run shows at least one such spike, something I never saw in the PyTorch version. The best epoch in this run was: loss: 0.0505 - accuracy: 0.9847 - val_loss: 3.7758 - val_accuracy: 0.4750
  • Validation accuracy plateaus. There are probably several causes, for example too few training images and a model that is still computationally heavy; combining this with ResNet-style connections should help, and I will cover that in a later update.

2、预测

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the best saved model weights
model.load_weights('best_model.h5')

# Load and preprocess the image
img = Image.open("./data/Brad Pitt/001_c04300ef.jpg")
image = tf.image.resize(np.array(img), [256, 256])
img_array = tf.expand_dims(image, 0)  # add a batch dimension

predict = model.predict(img_array)
print("Prediction: ", classnames[np.argmax(predict)])
```
1/1 [==============================] - 0s 384ms/step
Prediction:  Brad Pitt
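Note that the final `Dense` layer of this model has no activation, so `model.predict` returns raw logits. `np.argmax` on logits already picks the correct class, but if you also want a confidence score, you can apply softmax first. A minimal NumPy sketch with made-up logits:

```python
import numpy as np

def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    z = logits - np.max(logits)  # shift by the max to avoid overflow in exp
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.1, -0.3, 4.0, 0.5])  # hypothetical model output
probs = softmax(logits)

print(np.argmax(probs))           # same index as np.argmax(logits): 2
print(round(float(probs[2]), 3))  # confidence of the predicted class
```

This is also why the model should be compiled with a loss that sets `from_logits=True` when the output layer has no softmax.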
