If I train the model directly, save it, and then read it back with model = load_model('./my_model.h5'), everything works fine.
But if I remove the comment marks and swap model.fit for the commented-out version, loading the model after training breaks.
How do I fix this? I've spent half a day on it with no luck QQ
1_mnist.py
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train/255.0, x_test/255.0

# x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
# x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
# image_gen_train = ImageDataGenerator(
#     rescale=1./1.,
#     rotation_range=45,
#     width_shift_range=15,
#     height_shift_range=15,
#     horizontal_flip=True,
#     zoom_range=0.5
# )
# image_gen_train.fit(x_train)

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

# model.fit(image_gen_train.flow(x_train, y_train, batch_size=32), epochs=5)
model.fit(x_train, y_train, batch_size=32, epochs=5)

model.summary()
model.save('my_model.h5')
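(The loading step mentioned above lives in a separate script that isn't shown in the thread; presumably it is roughly the hypothetical sketch below.)

# Hypothetical companion script (not included in the thread) that performs
# the load step described in the question.
from tensorflow.keras.models import load_model

model = load_model('./my_model.h5')
model.summary()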
Using the commented-out code instead produces the following error:
raise ValueError('The last dimension of the inputs to `Dense` '
ValueError: The last dimension of the inputs to `Dense` should be defined. Found `None`.
Flatten needs an input_shape, like this:
tf.keras.layers.Flatten(input_shape=(28, 28))
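As a sketch (my illustration, not the poster's exact code): since the commented-out generator branch reshapes the images to (28, 28, 1), the model definition for that path would declare that shape on the first layer.

import tensorflow as tf

model = tf.keras.models.Sequential([
    # With the input shape declared up front, Keras can build the Dense
    # weights immediately, so the "last dimension ... Found `None`" error
    # no longer appears when the saved model is rebuilt.
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])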
That works now, thanks.
But why does input_shape have to be added?
Isn't Flatten just supposed to flatten the data?
The model has to know its input specification, so the first layer needs an input_shape. Alternatively:
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

x = tf.keras.layers.Input(shape=(28, 28))
# or x = tf.Variable(tf.random.truncated_normal([28, 28]))
y = model(x)
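A small aside (my sketch, not from the thread): tf.keras models also expose model.build(), which serves the same purpose as calling the model on an Input tensor. It creates the layer weights with known shapes, which is exactly what the "last dimension ... Found `None`" error was complaining about.

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Build explicitly with a batch input shape (batch dimension left as None).
model.build(input_shape=(None, 28, 28))
print(model.layers[1].kernel.shape)   # (784, 128): the Dense weights now exist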
Simply loading the saved model works fine for me,
but the model itself looks odd; I'm not sure whether that's because it has no input spec or because of fit_generator.
Someone else may have to answer that part.
You can refer to the official documentation below.
The original run (plain model.fit):
Epoch 1/5
60000/60000 [==============================] - 3s 56us/sample - loss: 0.2595 - sparse_categorical_accuracy: 0.9258
Epoch 2/5
60000/60000 [==============================] - 3s 53us/sample - loss: 0.1157 - sparse_categorical_accuracy: 0.9660
Epoch 3/5
60000/60000 [==============================] - 3s 54us/sample - loss: 0.0799 - sparse_categorical_accuracy: 0.9757
Epoch 4/5
60000/60000 [==============================] - 3s 54us/sample - loss: 0.0594 - sparse_categorical_accuracy: 0.9819
Epoch 5/5
60000/60000 [==============================] - 3s 53us/sample - loss: 0.0457 - sparse_categorical_accuracy: 0.9861
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) multiple 0
_________________________________________________________________
dense (Dense) multiple 100480
_________________________________________________________________
dense_1 (Dense) multiple 1290
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
With ImageDataGenerator:
Epoch 1/5
1875/1875 [==============================] - 24s 13ms/step - loss: 2.1724 - sparse_categorical_accuracy: 0.1695
Epoch 2/5
1875/1875 [==============================] - 23s 12ms/step - loss: 2.0163 - sparse_categorical_accuracy: 0.2663
Epoch 3/5
1875/1875 [==============================] - 22s 12ms/step - loss: 1.9251 - sparse_categorical_accuracy: 0.3191
Epoch 4/5
1875/1875 [==============================] - 22s 12ms/step - loss: 1.8789 - sparse_categorical_accuracy: 0.3411
Epoch 5/5
1875/1875 [==============================] - 22s 12ms/step - loss: 1.8455 - sparse_categorical_accuracy: 0.3585
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) multiple 0
_________________________________________________________________
dense (Dense) multiple 100480
_________________________________________________________________
dense_1 (Dense) multiple 1290
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
For reference, the official example from "Getting started with the Keras Sequential model":
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    Dense(32, input_shape=(784,)),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
])
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
y_train = np_utils.to_categorical(y_train, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)

datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True)

# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(x_train)

# fit the model on batches with real-time data augmentation:
model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
                    steps_per_epoch=len(x_train) / 32, epochs=epochs)

# here is a more "manual" example
for e in range(epochs):
    print('Epoch', e)
    batches = 0
    for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=32):
        model.fit(x_batch, y_batch)
        batches += 1
        if batches >= len(x_train) / 32:
            # we need to break the loop by hand because
            # the generator loops indefinitely
            break
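As a closing note: in TF 2.x, model.fit accepts the generator's flow() output directly and fit_generator is deprecated, which is what the commented-out line in the original script already does. A minimal end-to-end sketch (my illustration, combining the thread's hyperparameters with the input_shape fix; not tested against the poster's exact environment):

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train / 255.0).reshape(-1, 28, 28, 1)

image_gen_train = ImageDataGenerator(
    rotation_range=45,
    width_shift_range=15,
    height_shift_range=15,
    horizontal_flip=True,
    zoom_range=0.5)
image_gen_train.fit(x_train)   # only needed for featurewise options; kept to mirror the original script

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),   # input spec declared on the first layer
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

# model.fit takes the augmenting iterator directly in TF 2.x
model.fit(image_gen_train.flow(x_train, y_train, batch_size=32), epochs=5)
model.save('my_model.h5')

model = tf.keras.models.load_model('my_model.h5')   # loads without the Dense / `None` error
model.summary()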