DAY 18

## 【18】TensorFlow training tips: how to do regularization easily and conveniently

``````REGULARIZER = tf.contrib.layers.l2_regularizer(0.1)
``````
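For reference, `tf.contrib.layers.l2_regularizer(scale)` applied to a variable `w` evaluates to `scale * tf.nn.l2_loss(w)`, and `tf.nn.l2_loss(w)` is `sum(w ** 2) / 2`. A minimal NumPy sketch of that arithmetic (the helper name `l2_penalty` and the sample weights are made up for illustration):

``````python
import numpy as np

def l2_penalty(scale, w):
    # Mirrors tf.contrib.layers.l2_regularizer(scale)(w):
    # scale * tf.nn.l2_loss(w) == scale * sum(w ** 2) / 2
    return scale * np.sum(np.square(w)) / 2.0

w = np.array([[1.0, -2.0], [3.0, 0.5]])
print(l2_penalty(0.1, w))  # 0.1 * (1 + 4 + 9 + 0.25) / 2 = 0.7125
``````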

``````x = tf.placeholder(shape=[None, 2], dtype=tf.float32, name='x')

w = tf.get_variable(
    name='weight',
    shape=(2, 4),
    dtype=tf.float32,
    regularizer=REGULARIZER)
b = tf.get_variable(
    name='bias',
    shape=4,
    dtype=tf.float32,
    regularizer=REGULARIZER)

assign_w = tf.assign(w, WEIGHT_VALUE)
assign_b = tf.assign(b, BIAS_VALUE)

out = tf.matmul(x, w) + b
``````

``````with tf.Session() as sess:
    sess.run([assign_w, assign_b])

    result = sess.run(out, feed_dict={x: INPUT_VALUE})

    wd_loss = tf.reduce_sum(
        tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES), name='wd_loss')
    weight_loss = sess.run(wd_loss)

    print(f'result:{result}, weight_loss:{weight_loss}')
``````
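In actual training you would add `wd_loss` to the data loss before minimizing; the effect is classic weight decay, since the gradient of `scale * sum(w ** 2) / 2` with respect to `w` is `scale * w`. A NumPy sketch of one such gradient step (all values here are made up; the data-loss gradient is set to zero to isolate the decay effect):

``````python
import numpy as np

scale, lr = 0.1, 0.5
w = np.array([2.0, -4.0])

data_grad = np.zeros_like(w)        # pretend the data loss contributes no gradient
total_grad = data_grad + scale * w  # gradient of (data loss + L2 penalty)
w_new = w - lr * total_grad
print(w_new)  # each step pulls the weights toward zero: [ 1.9 -3.8]
``````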

OK, but what if you don't want to work at such a low level and would rather build this with `tf.layers`? How do you get your dense layer's weights into `tf.GraphKeys.REGULARIZATION_LOSSES`? The principle is exactly the same: the TensorFlow APIs expose hooks for it.

``````x = tf.placeholder(shape=[None, 2], dtype=tf.float32, name='x')

weight_init = tf.constant_initializer(WEIGHT_VALUE)
bias_init = tf.constant_initializer(BIAS_VALUE)
out = tf.layers.dense(x, 4, kernel_initializer=weight_init,
                      bias_initializer=bias_init,
                      kernel_regularizer=REGULARIZER,
                      bias_regularizer=REGULARIZER)
``````

``````wd_loss = tf.reduce_sum(
    tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES), name='wd_loss')
weight_loss = sess.run(wd_loss)
``````

``````raw_value, raw_loss = raw_dense()
tf.reset_default_graph()
layers_value, layers_loss = layer_dense()

assert np.all(raw_value == layers_value)
assert raw_loss == layers_loss
``````

GitHub source code