Deep Learning Week 17: Optimizer Comparison Experiment

Table of Contents
Deep Learning Week 17: Optimizer Comparison Experiment
I. Preface
II. My Environment
III. Preliminary Work
1. Configure the Environment
2. Import the Data
2.1 Load the Data
2.2 Inspect the Data
2.3 Configure the Dataset
2.4 Data Visualization
IV. Build the Model
V. Train the Model
VI. Evaluate the Model
1. Accuracy and Loss Plots
2. Model Evaluation

I. Preface

  • 🍨 This post is a learning-record blog from the 🔗 365-day deep learning training camp
  • 🍖 Original author: K同学啊 | tutoring and custom projects available

At last, the basics check-in series is essentially finished. I have picked up a lot of deep learning fundamentals and can now use the relevant functions fairly proficiently. This final post investigates how different optimizers and parameter configurations affect a model; an optimizer comparison can also be included in a paper to round out the experimental work.

I remember that this April I co-authored an EI conference paper with a senior classmate, and the optimizer comparison took up a fair share of it. Through this session I want to work out for myself how such a comparison is done. With final exams approaching, I have covered only the basics here without further extensions, and will set aside a week later to complete it.

II. My Environment

  • OS: Windows 10
  • Language environment: Python 3.8.0
  • IDE: PyCharm 2023.2.3
  • Deep learning framework: TensorFlow
  • GPU and VRAM: RTX 3060 (8 GB)

III. Preliminary Work

1. Configure the Environment

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]                                        # If there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # Allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

from tensorflow import keras
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import warnings, os, PIL, pathlib

warnings.filterwarnings("ignore")                # Suppress warning messages

plt.rcParams['font.sans-serif']    = ['SimHei']  # Display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False       # Display minus signs correctly

2. Import the Data

Import all the cat and dog images, which split into training-set images (train_images), training-set labels (train_labels), test-set images (test_images), and test-set labels (test_labels). The dataset comes from K同学啊.

2.1 Load the Data
data_dir    = "/home/mw/input/dogcat3675/365-7-data"
data_dir    = pathlib.Path(data_dir)

image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)
Total number of images: 3400
img_height = 256
img_width  = 256
batch_size = 32
"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(data_dir,validation_split=0.2,subset="training",seed=12,image_size=(img_height, img_width),batch_size=batch_size)

Output:

Found 3400 files belonging to 2 classes.
Using 2720 files for training.
"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 3400 files belonging to 2 classes.
Using 680 files for validation.

We can output the dataset's labels via class_names; the labels correspond to the directory names in alphabetical order.

class_names = train_ds.class_names
print(class_names)

['cat', 'dog']

2.2 Inspect the Data
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)     # (batch_size, height, width, channels)
    print(labels_batch.shape)    # (batch_size,)
    break
(32, 256, 256, 3)
(32,)
2.3 Configure the Dataset
AUTOTUNE = tf.data.AUTOTUNE

def train_preprocessing(image, label):
    return (image / 255.0, label)        # Scale pixel values to [0, 1]

train_ds = (
    train_ds.cache()
            .shuffle(1000)
            .map(train_preprocessing)    # The preprocessing function can be set here
#           .batch(batch_size)           # batch_size was already set in image_dataset_from_directory
            .prefetch(buffer_size=AUTOTUNE)
)

val_ds = (
    val_ds.cache()
          .shuffle(1000)
          .map(train_preprocessing)      # The preprocessing function can be set here
#         .batch(batch_size)             # batch_size was already set in image_dataset_from_directory
          .prefetch(buffer_size=AUTOTUNE)
)
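Here cache() keeps the decoded images in memory after the first pass, and prefetch(buffer_size=AUTOTUNE) overlaps preprocessing with training; this is part of why the first training epoch below takes about 40 s while later epochs take roughly 16 s.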
2.4 Data Visualization
plt.figure(figsize=(10, 8))        # Figure is 10 wide by 8 tall
plt.suptitle("Data preview")

for images, labels in train_ds.take(1):
    for i in range(15):
        plt.subplot(4, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)
        plt.imshow(images[i])                  # Show the image
        plt.xlabel(class_names[labels[i]])     # Show the label
plt.show()

[Figure: 15 sample training images in a 4×5 grid, each labeled cat or dog]

IV. Build the Model

from tensorflow.keras.layers import Dropout, Dense, BatchNormalization
from tensorflow.keras.models import Model

def create_model(optimizer='adam'):
    # Load the pre-trained model
    vgg16_base_model = tf.keras.applications.vgg16.VGG16(
        weights='imagenet',
        include_top=False,
        input_shape=(img_width, img_height, 3),
        pooling='avg')

    for layer in vgg16_base_model.layers:
        layer.trainable = False          # Freeze the convolutional base

    X = vgg16_base_model.output
    X = Dense(170, activation='relu')(X)
    X = BatchNormalization()(X)
    X = Dropout(0.5)(X)

    output = Dense(len(class_names), activation='softmax')(X)
    vgg16_model = Model(inputs=vgg16_base_model.input, outputs=output)

    vgg16_model.compile(optimizer=optimizer,
                        loss='sparse_categorical_crossentropy',
                        metrics=['accuracy'])
    return vgg16_model

model1 = create_model(optimizer=tf.keras.optimizers.Adam())
model2 = create_model(optimizer=tf.keras.optimizers.SGD())
model2.summary()
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58892288/58889256 [==============================] - 3s 0us/step
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, 256, 256, 3)]     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 256, 256, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 256, 256, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 128, 128, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 128, 128, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 128, 128, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 64, 64, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 64, 64, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 64, 64, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 64, 64, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 32, 32, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 32, 32, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 16, 16, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 16, 16, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 16, 16, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 16, 16, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 8, 8, 512)         0         
_________________________________________________________________
global_average_pooling2d_1 ( (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 170)               87210     
_________________________________________________________________
batch_normalization_1 (Batch (None, 170)               680       
_________________________________________________________________
dropout_1 (Dropout)          (None, 170)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 2)                 342       
=================================================================
Total params: 14,802,920
Trainable params: 87,892
Non-trainable params: 14,715,028
_________________________________________________________________
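Because the VGG16 backbone is frozen, only the new classification head trains: Dense(170) contributes 512 × 170 + 170 = 87,210 parameters, BatchNormalization contributes 340 trainable parameters (its other 340, the moving mean and variance, are non-trainable), and Dense(2) contributes 170 × 2 + 2 = 342, which together give the 87,892 trainable parameters shown above.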

V. Train the Model

NO_EPOCHS = 50

history_model1 = model1.fit(train_ds, epochs=NO_EPOCHS, verbose=1, validation_data=val_ds)
history_model2 = model2.fit(train_ds, epochs=NO_EPOCHS, verbose=1, validation_data=val_ds)
Training log for model1 (Adam):

Epoch 1/50
85/85 [==============================] - 40s 230ms/step - loss: 0.2183 - accuracy: 0.8948 - val_loss: 0.1916 - val_accuracy: 0.9912
Epoch 2/50
85/85 [==============================] - 15s 182ms/step - loss: 0.0372 - accuracy: 0.9901 - val_loss: 0.0799 - val_accuracy: 0.9956
Epoch 3/50
85/85 [==============================] - 16s 184ms/step - loss: 0.0213 - accuracy: 0.9941 - val_loss: 0.0456 - val_accuracy: 0.9897
Epoch 4/50
85/85 [==============================] - 16s 185ms/step - loss: 0.0191 - accuracy: 0.9916 - val_loss: 0.0220 - val_accuracy: 0.9941
Epoch 5/50
85/85 [==============================] - 16s 187ms/step - loss: 0.0177 - accuracy: 0.9938 - val_loss: 0.0319 - val_accuracy: 0.9868
Epoch 6/50
85/85 [==============================] - 16s 188ms/step - loss: 0.0135 - accuracy: 0.9952 - val_loss: 0.0139 - val_accuracy: 0.9941
Epoch 7/50
85/85 [==============================] - 16s 189ms/step - loss: 0.0108 - accuracy: 0.9965 - val_loss: 0.0343 - val_accuracy: 0.9868
Epoch 8/50
85/85 [==============================] - 16s 190ms/step - loss: 0.0088 - accuracy: 0.9979 - val_loss: 0.0919 - val_accuracy: 0.9676
Epoch 9/50
85/85 [==============================] - 16s 191ms/step - loss: 0.0141 - accuracy: 0.9958 - val_loss: 0.0476 - val_accuracy: 0.9824
Epoch 10/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0093 - accuracy: 0.9985 - val_loss: 0.0067 - val_accuracy: 0.9985
Epoch 11/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0055 - accuracy: 0.9995 - val_loss: 0.0160 - val_accuracy: 0.9941
Epoch 12/50
85/85 [==============================] - 16s 194ms/step - loss: 0.0085 - accuracy: 0.9986 - val_loss: 0.0076 - val_accuracy: 0.9956
Epoch 13/50
85/85 [==============================] - 17s 195ms/step - loss: 0.0073 - accuracy: 0.9982 - val_loss: 0.0131 - val_accuracy: 0.9971
Epoch 14/50
85/85 [==============================] - 17s 195ms/step - loss: 0.0060 - accuracy: 0.9999 - val_loss: 0.0126 - val_accuracy: 0.9941
Epoch 15/50
85/85 [==============================] - 17s 196ms/step - loss: 0.0031 - accuracy: 0.9993 - val_loss: 0.0053 - val_accuracy: 0.9985
Epoch 16/50
85/85 [==============================] - 17s 196ms/step - loss: 0.0031 - accuracy: 0.9999 - val_loss: 0.0108 - val_accuracy: 0.9941
Epoch 17/50
85/85 [==============================] - 17s 196ms/step - loss: 0.0038 - accuracy: 0.9989 - val_loss: 0.0054 - val_accuracy: 0.9985
Epoch 18/50
85/85 [==============================] - 17s 195ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.0048 - val_accuracy: 0.9985
Epoch 19/50
85/85 [==============================] - 17s 195ms/step - loss: 0.0017 - accuracy: 1.0000 - val_loss: 0.0060 - val_accuracy: 0.9985
Epoch 20/50
85/85 [==============================] - 17s 195ms/step - loss: 0.0023 - accuracy: 0.9988 - val_loss: 0.0045 - val_accuracy: 0.9985
Epoch 21/50
85/85 [==============================] - 16s 194ms/step - loss: 0.0066 - accuracy: 0.9980 - val_loss: 0.0048 - val_accuracy: 1.0000
Epoch 22/50
85/85 [==============================] - 16s 194ms/step - loss: 0.0019 - accuracy: 0.9999 - val_loss: 0.0212 - val_accuracy: 0.9941
Epoch 23/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0038 - accuracy: 0.9990 - val_loss: 0.0049 - val_accuracy: 0.9985
Epoch 24/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0037 - accuracy: 0.9987 - val_loss: 0.0054 - val_accuracy: 0.9985
Epoch 25/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0017 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9721
Epoch 26/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0037 - accuracy: 0.9983 - val_loss: 0.0084 - val_accuracy: 0.9985
Epoch 27/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0022 - accuracy: 1.0000 - val_loss: 0.0078 - val_accuracy: 0.9985
Epoch 28/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0030 - accuracy: 0.9992 - val_loss: 0.0044 - val_accuracy: 0.9985
Epoch 29/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0011 - accuracy: 0.9999 - val_loss: 0.0047 - val_accuracy: 0.9971
Epoch 30/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 0.0082 - val_accuracy: 0.9985
Epoch 31/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0015 - accuracy: 0.9998 - val_loss: 0.0099 - val_accuracy: 0.9956
Epoch 32/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.0058 - val_accuracy: 0.9985
Epoch 33/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 0.0067 - val_accuracy: 0.9985
Epoch 34/50
85/85 [==============================] - 16s 193ms/step - loss: 9.7591e-04 - accuracy: 1.0000 - val_loss: 0.0058 - val_accuracy: 0.9985
Epoch 35/50
85/85 [==============================] - 16s 193ms/step - loss: 6.5250e-04 - accuracy: 1.0000 - val_loss: 0.0106 - val_accuracy: 0.9971
Epoch 36/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0016 - accuracy: 0.9997 - val_loss: 0.0140 - val_accuracy: 0.9926
Epoch 37/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0070 - accuracy: 0.9960 - val_loss: 0.0359 - val_accuracy: 0.9868
Epoch 38/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0023 - accuracy: 0.9996 - val_loss: 0.0309 - val_accuracy: 0.9897
Epoch 39/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0023 - accuracy: 0.9991 - val_loss: 0.0062 - val_accuracy: 0.9985
Epoch 40/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0014 - accuracy: 0.9999 - val_loss: 0.0133 - val_accuracy: 0.9956
Epoch 41/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0028 - accuracy: 0.9986 - val_loss: 0.1411 - val_accuracy: 0.9603
Epoch 42/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0036 - accuracy: 0.9991 - val_loss: 0.0233 - val_accuracy: 0.9941
Epoch 43/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0025 - accuracy: 0.9995 - val_loss: 0.0018 - val_accuracy: 1.0000
Epoch 44/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0015 - accuracy: 1.0000 - val_loss: 0.0043 - val_accuracy: 0.9985
Epoch 45/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0040 - accuracy: 0.9988 - val_loss: 0.0049 - val_accuracy: 0.9985
Epoch 46/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0011 - accuracy: 1.0000 - val_loss: 0.0034 - val_accuracy: 0.9985
Epoch 47/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 0.0055 - val_accuracy: 0.9971
Epoch 48/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0023 - accuracy: 0.9989 - val_loss: 0.0639 - val_accuracy: 0.9824
Epoch 49/50
85/85 [==============================] - 16s 193ms/step - loss: 8.3466e-04 - accuracy: 1.0000 - val_loss: 0.0049 - val_accuracy: 0.9985
Epoch 50/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0021 - accuracy: 0.9993 - val_loss: 0.0163 - val_accuracy: 0.9926
Training log for model2 (SGD):

Epoch 1/50
85/85 [==============================] - 17s 194ms/step - loss: 0.3235 - accuracy: 0.8649 - val_loss: 0.4082 - val_accuracy: 0.8088
Epoch 2/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0971 - accuracy: 0.9641 - val_loss: 0.2085 - val_accuracy: 0.9882
Epoch 3/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0705 - accuracy: 0.9773 - val_loss: 0.1053 - val_accuracy: 0.9926
Epoch 4/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0675 - accuracy: 0.9780 - val_loss: 0.0565 - val_accuracy: 0.9926
Epoch 5/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0510 - accuracy: 0.9841 - val_loss: 0.0317 - val_accuracy: 0.9941
Epoch 6/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0466 - accuracy: 0.9802 - val_loss: 0.0229 - val_accuracy: 0.9956
Epoch 7/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0424 - accuracy: 0.9869 - val_loss: 0.0160 - val_accuracy: 0.9985
Epoch 8/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0488 - accuracy: 0.9843 - val_loss: 0.0152 - val_accuracy: 0.9956
Epoch 9/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0405 - accuracy: 0.9892 - val_loss: 0.0134 - val_accuracy: 0.9971
Epoch 10/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0398 - accuracy: 0.9875 - val_loss: 0.0128 - val_accuracy: 0.9956
Epoch 11/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0387 - accuracy: 0.9856 - val_loss: 0.0139 - val_accuracy: 0.9956
Epoch 12/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0334 - accuracy: 0.9900 - val_loss: 0.0155 - val_accuracy: 0.9956
Epoch 13/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0321 - accuracy: 0.9915 - val_loss: 0.0119 - val_accuracy: 0.9956
Epoch 14/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0358 - accuracy: 0.9878 - val_loss: 0.0116 - val_accuracy: 0.9971
Epoch 15/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0270 - accuracy: 0.9898 - val_loss: 0.0098 - val_accuracy: 0.9985
Epoch 16/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0233 - accuracy: 0.9936 - val_loss: 0.0102 - val_accuracy: 0.9956
Epoch 17/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0274 - accuracy: 0.9915 - val_loss: 0.0106 - val_accuracy: 0.9971
Epoch 18/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0233 - accuracy: 0.9942 - val_loss: 0.0090 - val_accuracy: 0.9971
Epoch 19/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0284 - accuracy: 0.9894 - val_loss: 0.0111 - val_accuracy: 0.9971
Epoch 20/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0232 - accuracy: 0.9934 - val_loss: 0.0098 - val_accuracy: 0.9956
Epoch 21/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0262 - accuracy: 0.9928 - val_loss: 0.0108 - val_accuracy: 0.9941
Epoch 22/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0239 - accuracy: 0.9940 - val_loss: 0.0112 - val_accuracy: 0.9941
Epoch 23/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0251 - accuracy: 0.9927 - val_loss: 0.0089 - val_accuracy: 0.9956
Epoch 24/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0251 - accuracy: 0.9910 - val_loss: 0.0086 - val_accuracy: 0.9956
Epoch 25/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0254 - accuracy: 0.9904 - val_loss: 0.0102 - val_accuracy: 0.9956
Epoch 26/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0245 - accuracy: 0.9916 - val_loss: 0.0084 - val_accuracy: 0.9985
Epoch 27/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0226 - accuracy: 0.9932 - val_loss: 0.0086 - val_accuracy: 0.9985
Epoch 28/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0289 - accuracy: 0.9905 - val_loss: 0.0088 - val_accuracy: 0.9971
Epoch 29/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0219 - accuracy: 0.9928 - val_loss: 0.0091 - val_accuracy: 0.9956
Epoch 30/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0215 - accuracy: 0.9943 - val_loss: 0.0077 - val_accuracy: 0.9985
Epoch 31/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0178 - accuracy: 0.9952 - val_loss: 0.0074 - val_accuracy: 0.9971
Epoch 32/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0231 - accuracy: 0.9929 - val_loss: 0.0117 - val_accuracy: 0.9956
Epoch 33/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0183 - accuracy: 0.9962 - val_loss: 0.0078 - val_accuracy: 0.9971
Epoch 34/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0186 - accuracy: 0.9950 - val_loss: 0.0076 - val_accuracy: 0.9971
Epoch 35/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0185 - accuracy: 0.9949 - val_loss: 0.0089 - val_accuracy: 0.9956
Epoch 36/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0199 - accuracy: 0.9924 - val_loss: 0.0080 - val_accuracy: 0.9971
Epoch 37/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0165 - accuracy: 0.9960 - val_loss: 0.0076 - val_accuracy: 0.9971
Epoch 38/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0180 - accuracy: 0.9945 - val_loss: 0.0083 - val_accuracy: 0.9985
Epoch 39/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0219 - accuracy: 0.9904 - val_loss: 0.0087 - val_accuracy: 0.9971
Epoch 40/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0179 - accuracy: 0.9971 - val_loss: 0.0086 - val_accuracy: 0.9956
Epoch 41/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0166 - accuracy: 0.9937 - val_loss: 0.0076 - val_accuracy: 0.9971
Epoch 42/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0159 - accuracy: 0.9946 - val_loss: 0.0074 - val_accuracy: 0.9985
Epoch 43/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0161 - accuracy: 0.9942 - val_loss: 0.0079 - val_accuracy: 0.9971
Epoch 44/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0173 - accuracy: 0.9957 - val_loss: 0.0073 - val_accuracy: 0.9971
Epoch 45/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0149 - accuracy: 0.9965 - val_loss: 0.0069 - val_accuracy: 0.9985
Epoch 46/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0157 - accuracy: 0.9963 - val_loss: 0.0076 - val_accuracy: 0.9971
Epoch 47/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0153 - accuracy: 0.9959 - val_loss: 0.0074 - val_accuracy: 0.9971
Epoch 48/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0162 - accuracy: 0.9957 - val_loss: 0.0068 - val_accuracy: 0.9985
Epoch 49/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0132 - accuracy: 0.9955 - val_loss: 0.0079 - val_accuracy: 0.9971
Epoch 50/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0131 - accuracy: 0.9954 - val_loss: 0.0085 - val_accuracy: 0.9971
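
The two runs above cover only Adam and SGD. As a minimal sketch of how the comparison could later be extended (the optimizer list and the histories dict are my own illustrative additions, not part of the original experiment), a loop over create_model keeps things compact:

# Hedged sketch: train one fresh model per optimizer and keep each run's history.
# RMSprop and Adagrad are illustrative choices, not from the original post.
optimizers = {
    'Adam':    tf.keras.optimizers.Adam(),
    'SGD':     tf.keras.optimizers.SGD(),
    'RMSprop': tf.keras.optimizers.RMSprop(),
    'Adagrad': tf.keras.optimizers.Adagrad(),
}
histories = {}
for name, opt in optimizers.items():
    m = create_model(optimizer=opt)   # a fresh model per optimizer keeps the comparison fair
    histories[name] = m.fit(train_ds, epochs=NO_EPOCHS, verbose=0, validation_data=val_ds)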

VI. Evaluate the Model

1. Accuracy and Loss Plots

from matplotlib.ticker import MultipleLocator

plt.rcParams['savefig.dpi'] = 300   # Saved-figure resolution
plt.rcParams['figure.dpi']  = 300   # Display resolution

acc1     = history_model1.history['accuracy']
acc2     = history_model2.history['accuracy']
val_acc1 = history_model1.history['val_accuracy']
val_acc2 = history_model2.history['val_accuracy']

loss1     = history_model1.history['loss']
loss2     = history_model2.history['loss']
val_loss1 = history_model1.history['val_loss']
val_loss2 = history_model2.history['val_loss']

epochs_range = range(len(acc1))

plt.figure(figsize=(16, 4))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc1, label='Training Accuracy-Adam')
plt.plot(epochs_range, acc2, label='Training Accuracy-SGD')
plt.plot(epochs_range, val_acc1, label='Validation Accuracy-Adam')
plt.plot(epochs_range, val_acc2, label='Validation Accuracy-SGD')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
# Set the x-axis tick interval to 1
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss1, label='Training Loss-Adam')
plt.plot(epochs_range, loss2, label='Training Loss-SGD')
plt.plot(epochs_range, val_loss1, label='Validation Loss-Adam')
plt.plot(epochs_range, val_loss2, label='Validation Loss-SGD')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
# Set the x-axis tick interval to 1
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))

plt.show()

[Figure: training/validation accuracy (left) and loss (right) curves comparing Adam and SGD]

2. Model Evaluation

def test_accuracy_report(model):
    score = model.evaluate(val_ds, verbose=0)
    print('Loss function: %s, accuracy:' % score[0], score[1])

test_accuracy_report(model2)
Loss function: 0.008488086983561516, accuracy: 0.9970588088035583
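
The report above evaluates only model2 (SGD). To put both optimizers side by side, a short loop is enough; this is a minimal sketch assuming model1 and model2 are still in memory:

for name, m in [('Adam', model1), ('SGD', model2)]:
    loss, acc = m.evaluate(val_ds, verbose=0)
    print(f'{name}: val_loss={loss:.4f}, val_accuracy={acc:.4f}')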
