The 12th iThome Ironman Contest (鐵人賽)

DAY 28
Self-Challenge Group

The Zero-Background Guide to Becoming an AI Dream-Interpretation Master — series, part 28

[The Zero-Background Guide to Becoming an AI Dream-Interpretation Master] Day 28 - Zhouyi Dream Interpretation with AI (9)

Foreword

About this series

Hello everyone, we are the AI . FREE Team. For this Ironman Contest, the team will teach readers step by step, from zero to one: (1) basic Python syntax, (2) the Python web framework Django, (3) Python web scraping of a Zhouyi dream-interpretation site, (4) the basics of TensorFlow language models and LSTM training, and (5) actually deploying the AI dream-interpretation model onto the web framework.

Why we write about the technology starting from zero

The AI . FREE Team was founded to develop learning resources for AI and emerging technologies, so that learners from every field can study data science across domains, build a "slash" career through self-directed learning, and bring intelligent applications into their own areas. Whether your background is in the humanities, law, business, management, or medicine, you are free to learn AI.

Resources

AI . FREE Team reader-exclusive perk → free Python Basics learning resources

Implementation, part 2

For the deep learning framework this time we use Keras (it's quicker XD).

Please carry over the preprocessing code from the previous article.
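For readers jumping in here, the shapes the previous post's preprocessing is assumed to produce are: sentences as integer sequences right-padded to a fixed length (`X_train_int`), and labels one-hot encoded over 5 emoji classes (`y_oh_train`). The helper names and word ids below are hypothetical, just to illustrate the expected shapes:

```python
import numpy as np

def pad_to_length(seq, length, pad_id=0):
    """Right-pad (or truncate) an integer sequence to `length` tokens."""
    return seq[:length] + [pad_id] * (length - len(seq))

def one_hot(label, num_classes=5):
    """Turn a class index into a one-hot vector."""
    vec = np.zeros(num_classes)
    vec[label] = 1.0
    return vec

# e.g. "he is a good friend" mapped to made-up word ids, padded to 10 tokens
X_train_int = [pad_to_length([5, 12, 3, 40, 7], 10)]
y_oh_train = [one_hot(0)]  # class index 0, say the ❤️ label

print(X_train_int[0])  # [5, 12, 3, 40, 7, 0, 0, 0, 0, 0]
print(y_oh_train[0])   # [1. 0. 0. 0. 0.]
```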

Importing packages

import tensorflow as tf
from tensorflow.keras.layers import LSTM, Embedding, Dense, TimeDistributed, Dropout, Bidirectional, GRU

Model

model = tf.keras.Sequential([
    Embedding(len(words), 64),
    Bidirectional(LSTM(64)),
    Dense(64, activation='relu'),
    Dense(5, activation='softmax')
])

Let's take a look at the model architecture.

model.summary()
Model: "sequential_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_7 (Embedding)      (None, None, 64)          21440     
_________________________________________________________________
bidirectional_7 (Bidirection (None, 128)               66048     
_________________________________________________________________
dense_11 (Dense)             (None, 64)                8256      
_________________________________________________________________
dense_12 (Dense)             (None, 5)                 325       
=================================================================
Total params: 96,069
Trainable params: 96,069
Non-trainable params: 0
_________________________________________________________________

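The parameter counts in the summary can be reproduced by hand; this is a sketch under the assumption that the vocabulary has 335 words (implied by the summary: 21440 / 64 = 335):

```python
vocab_size = 335   # assumed from the summary: 21440 embedding params / 64 dims
embed_dim = 64
lstm_units = 64

# Embedding: one 64-dim vector per vocabulary entry
embedding = vocab_size * embed_dim                      # 21440

# LSTM: 4 gates, each with input, recurrent, and bias weights;
# Bidirectional doubles it (forward + backward directions)
lstm = 4 * ((embed_dim + lstm_units + 1) * lstm_units)  # 33024
bilstm = 2 * lstm                                       # 66048

# Dense layers: (inputs + 1 bias) * units
dense1 = (2 * lstm_units + 1) * 64                      # 8256  (BiLSTM outputs 128)
dense2 = (64 + 1) * 5                                   # 325

total = embedding + bilstm + dense1 + dense2
print(total)  # 96069, matching "Total params" above
```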
Optimizer and loss function

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

categorical_crossentropy is the loss function for multi-class classification where each sample belongs to exactly one class and the labels are one-hot encoded (for integer labels you would use sparse_categorical_crossentropy instead).
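For a single sample the loss reduces to the negative log of the probability the model assigns to the true class; a minimal numpy sketch (the probabilities below are made up):

```python
import numpy as np

# Categorical cross-entropy for one sample:
# loss = -sum(y_true * log(y_pred)), where y_true is one-hot.
y_true = np.array([0.0, 1.0, 0.0, 0.0, 0.0])    # correct class is index 1
y_pred = np.array([0.1, 0.7, 0.1, 0.05, 0.05])  # softmax output

loss = -np.sum(y_true * np.log(y_pred))
print(round(loss, 4))  # 0.3567 — only the true class's probability matters
```

Because the one-hot vector zeroes out every other term, pushing the true class's probability toward 1 drives the loss toward 0.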

Training

Remember to convert all the data to numpy arrays here; feel free to experiment with the other parameters.

batch_size is the number of samples fed in per step.

epochs is the number of passes over the full dataset.

verbose toggles the progress bar.

If you have a validation set, you can also pass validation_data = (np.array(X_valid_int), np.array(y_oh_valid)).

history = model.fit(np.array(X_train_int), np.array(y_oh_train),
                    batch_size=4, epochs=20, verbose=1,
                    validation_data=(np.array(X_valid_int), np.array(y_oh_valid)))
Epoch 1/20
46/46 [==============================] - 1s 28ms/step - loss: 1.5894 - accuracy: 0.2732 - val_loss: 1.5550 - val_accuracy: 0.3929
Epoch 2/20
46/46 [==============================] - 0s 10ms/step - loss: 1.5250 - accuracy: 0.3880 - val_loss: 1.4884 - val_accuracy: 0.3750
Epoch 3/20
46/46 [==============================] - 0s 11ms/step - loss: 1.3422 - accuracy: 0.3825 - val_loss: 1.2045 - val_accuracy: 0.5179
Epoch 4/20
46/46 [==============================] - 0s 11ms/step - loss: 0.8867 - accuracy: 0.6557 - val_loss: 1.2385 - val_accuracy: 0.6429
Epoch 5/20
46/46 [==============================] - 0s 11ms/step - loss: 0.4955 - accuracy: 0.7978 - val_loss: 1.6667 - val_accuracy: 0.6786
Epoch 6/20
46/46 [==============================] - 0s 11ms/step - loss: 0.3452 - accuracy: 0.8361 - val_loss: 1.4240 - val_accuracy: 0.7500
Epoch 7/20
46/46 [==============================] - 0s 11ms/step - loss: 0.2393 - accuracy: 0.9016 - val_loss: 1.6240 - val_accuracy: 0.7679
Epoch 8/20
46/46 [==============================] - 0s 11ms/step - loss: 0.1503 - accuracy: 0.9454 - val_loss: 2.0856 - val_accuracy: 0.7679
Epoch 9/20
46/46 [==============================] - 1s 11ms/step - loss: 0.1088 - accuracy: 0.9508 - val_loss: 2.0742 - val_accuracy: 0.7321
Epoch 10/20
46/46 [==============================] - 0s 10ms/step - loss: 0.0508 - accuracy: 0.9891 - val_loss: 2.0070 - val_accuracy: 0.7857
Epoch 11/20
46/46 [==============================] - 0s 11ms/step - loss: 0.0334 - accuracy: 0.9945 - val_loss: 2.1783 - val_accuracy: 0.7857
Epoch 12/20
46/46 [==============================] - 0s 11ms/step - loss: 0.0079 - accuracy: 1.0000 - val_loss: 2.3317 - val_accuracy: 0.7857
Epoch 13/20
46/46 [==============================] - 1s 11ms/step - loss: 0.0046 - accuracy: 1.0000 - val_loss: 2.3419 - val_accuracy: 0.7857
Epoch 14/20
46/46 [==============================] - 0s 11ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 2.4288 - val_accuracy: 0.7857
Epoch 15/20
46/46 [==============================] - 0s 11ms/step - loss: 0.0015 - accuracy: 1.0000 - val_loss: 2.4866 - val_accuracy: 0.7857
Epoch 16/20
46/46 [==============================] - 0s 11ms/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 2.5414 - val_accuracy: 0.7857
Epoch 17/20
46/46 [==============================] - 1s 11ms/step - loss: 9.3662e-04 - accuracy: 1.0000 - val_loss: 2.6041 - val_accuracy: 0.7857
Epoch 18/20
46/46 [==============================] - 0s 11ms/step - loss: 8.3006e-04 - accuracy: 1.0000 - val_loss: 2.6372 - val_accuracy: 0.7857
Epoch 19/20
46/46 [==============================] - 1s 11ms/step - loss: 6.8749e-04 - accuracy: 1.0000 - val_loss: 2.6845 - val_accuracy: 0.7857
Epoch 20/20
46/46 [==============================] - 0s 11ms/step - loss: 5.7524e-04 - accuracy: 1.0000 - val_loss: 2.7143 - val_accuracy: 0.7857

Visualizing the results

We use matplotlib's line plot, plt.plot.

import pandas as pd
import matplotlib.pyplot as plt

hist = pd.DataFrame(history.history)
plt.figure(figsize=(12, 12))
plt.plot(hist["accuracy"], label="train")
plt.plot(hist["val_accuracy"], label="validation")
plt.legend()
plt.show()

Testing

idx = 5
# model.predict expects a batch, so wrap the single sequence in an extra dimension
test_pred = model.predict(np.array([X_valid_int[idx]]))
answer = np.argmax(test_pred[0])
print(f'sentence : {[idx2word[k] for k in X_valid_int[idx]]}')
print(f'predict : {label_to_emoji(answer)}')
sentence : ['he', 'is', 'a', 'good', 'friend', 'pad', 'pad', 'pad', 'pad', 'pad']
predict : ❤️

You can change idx at random to test different sentences.

Want to get to know AI . FREE Team better?

AI . FREE Team official website: https://aifreeblog.herokuapp.com/
AI . FREE Team Github: https://github.com/AI-FREE-Team/
AI . FREE Team Facebook page: https://www.facebook.com/AI.Free.Team/
AI . FREE Team IG: https://www.instagram.com/aifreeteam/
AI . FREE Team Youtube: https://www.youtube.com/channel/UCjw6Kuw3kwM_il39NTBJVTg/

This article is also published on the AI . FREE Team blog.
(Want more articles and more AI knowledge? Stay tuned to the team's channels!)
