Table of Contents
- model.compile()
  - optimizer
  - loss
  - metrics
- model.fit()
  - Basics
  - Advanced: Validation
  - Advanced: Callbacks
- model.evaluate()
- model.predict()
model.compile()
model.compile(
    optimizer=keras.optimizers.RMSprop(),               # or optimizer='rmsprop'
    loss=keras.losses.SparseCategoricalCrossentropy(),  # or loss='sparse_categorical_crossentropy'
    metrics=["accuracy"],
)
optimizer
RMSprop()  # 'rmsprop'
SGD()      # 'sgd'
Adam()     # "adam"
Common constructor arguments (e.g. for SGD; see the sketch below):
- learning_rate=0.01
- momentum=0.9
- nesterov=True
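A rough sketch (my addition, with illustrative hyperparameter values; it assumes keras is imported and model is the model defined earlier) of building the optimizer explicitly instead of passing the 'sgd' string, which is what the arguments above are for:
opt = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(
    optimizer=opt,  # the string form 'sgd' would use default hyperparameters instead
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)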
loss
MeanSquaredError()               # "mse"
CategoricalCrossentropy()        # 'categorical_crossentropy'
SparseCategoricalCrossentropy()  # "sparse_categorical_crossentropy"
KLDivergence()                   # "kl_divergence"
CosineSimilarity()
Useful constructor argument for the crossentropy losses (see the sketch below):
- from_logits=True
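A minimal sketch (my addition; the layer sizes are placeholders) of where from_logits=True matters: when the model's last layer has no softmax, the loss applies the softmax internally.
logits_model = keras.Sequential([
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),  # no softmax here, so the outputs are raw logits
])
logits_model.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)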
metrics
"acc" # "accuracy"AUC()Precision()Recall()MeanAbsoluteError()MeanAbsolutePercentageError()CategoricalAccuracy()SparseCategoricalAccuracy() # "sparse_categorical_accuracy"
model.fit()
fit() prints the training progress and the current loss/metric values on its own.
history = model.fit(
    x_train, y_train,
    batch_size=64,
    epochs=2,
    validation_split=0.2,
)
'''
Epoch 1/2
750/750 [==============================] - 2s 2ms/step - loss: 0.5648 - accuracy: 0.8473 - val_loss: 0.1793 - val_accuracy: 0.9474
Epoch 2/2
750/750 [==============================] - 1s 1ms/step - loss: 0.1686 - accuracy: 0.9506 - val_loss: 0.1398 - val_accuracy: 0.9576
313/313 - 0s - loss: 0.1401 - accuracy: 0.9580
'''
The 750 in each epoch's progress bar counts batches, not samples: training consumes the data one batch at a time (here 750 batches × batch_size 64 = 48,000 training samples, after 20% was split off for validation).
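The returned History object keeps a per-epoch record of every loss and metric, keyed by name; a small sketch (my addition) of reading it back:
print(history.history.keys())           # e.g. dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
print(history.history["val_accuracy"])  # one value per epoch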
Basics
- With a dataset in NumPy array form, pass batch_size to fit() (otherwise the default of 32 is used);
# Train the model for 1 epoch from Numpy data
batch_size = 64
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=1)
- With a tf.data.Dataset, do not pass batch_size to fit(): the batch size is already set on the dataset itself (and it has to be set there, because passing batch_size to fit() for a dataset input raises an error anyway).
# Train the model for 1 epoch using a dataset
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)  # batch(batch_size) must be specified here
history = model.fit(dataset, epochs=1)
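A minimal sketch (my addition; buffer and batch sizes are placeholders) of the more typical pipeline, which shuffles before batching:
train_dataset = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(buffer_size=1024)  # shuffle samples before forming batches
    .batch(64)                  # batching is defined on the dataset, not in fit()
)
history = model.fit(train_dataset, epochs=1)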
Advanced: Validation
- NumPy data: use validation_split in fit() to split off a fraction of the training set to serve as the validation set.
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)
- tf.data.Dataset: validation_split is not supported in fit() for dataset inputs; instead, supply a separate validation set via validation_data.
model.fit(train_dataset, epochs=epochs, validation_data=val_dataset)
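A minimal sketch (my addition; x_val and y_val are hypothetical held-out arrays) of how such a val_dataset is built, batched up front just like the training dataset:
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(64)
model.fit(train_dataset, epochs=2, validation_data=val_dataset)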
Advanced: Callbacks
You can also use callbacks to do things like periodically changing the learning rate of your optimizer, streaming metrics to a Slack bot, sending yourself an email notification when training is complete, etc.
- Save the whole model at the end of every epoch, just like calling model.save("path_to_my_model").
path_checkpoint = "path_to_my_model_{epoch}"
modelckpt_callback = keras.callbacks.ModelCheckpoint(
    filepath=path_checkpoint,
    save_freq="epoch",  # save at the end of every epoch
)
- Early stopping
es_callback = keras.callbacks.EarlyStopping(
    monitor="val_loss",  # quantity to monitor
    min_delta=0,
    patience=5,          # stop if there is no improvement for 5 consecutive epochs
)
- Save only the weights of the best model
path_checkpoint = "model_checkpoint.h5"
modelckpt_callback = keras.callbacks.ModelCheckpoint(
    monitor="val_loss",       # quantity to monitor
    filepath=path_checkpoint,
    verbose=1,
    save_weights_only=True,   # save weights only, not the full model
    save_best_only=True,      # keep only the best checkpoint so far
)
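The snippets above only construct the callbacks; here is a minimal sketch (my addition, with illustrative epoch counts) of actually wiring them into training via the callbacks argument of fit():
history = model.fit(
    x_train, y_train,
    batch_size=64,
    epochs=20,
    validation_split=0.2,  # provides the "val_loss" that both callbacks monitor
    callbacks=[es_callback, modelckpt_callback],
)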
model.evaluate()
# score = model.evaluate(test_dataset)
score = model.evaluate(x_test, y_test)
print("Test loss:", score[0])
print("Test accuracy:", score[1])
model.predict()
# predictions = model.predict(x_test, batch_size=batch_size)
predictions = model.predict(x_test)
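A small sketch (my addition) for a multi-class classifier like the one above: each row of predictions holds one score per class, so argmax gives the predicted label.
import numpy as np

predicted_labels = np.argmax(predictions, axis=-1)  # index of the highest-scoring class for each sample
print(predictions.shape, predicted_labels[:5])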