Hi everyone, we are the AI . FREE Team. In this Ironman contest, the team will walk readers through, from 0 to 1: (1) basic Python syntax, (2) the Python web framework Django, (3) Python web scraping on the 周易解夢網 dream-interpretation site, (4) the basics of Tensorflow AI language models and training with LSTM, and (5) actually deploying the AI dream-interpretation model onto the web framework.
The team's mission is to build learning resources for AI and new technologies, so that learners from any field can cross over into data science and develop a slash career through self-learning, bringing intelligent applications into their own domain. Whether your background is in the humanities, law, business, management, or medicine, you are free to learn AI.
AI . FREE Team reader-exclusive benefit → free Python Basics learning resources
This time we will use keras to implement the problem from the previous article.

%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
import warnings
from datetime import datetime
from matplotlib.colors import ListedColormap
from sklearn.datasets import make_classification, make_moons, make_circles
from sklearn.metrics import confusion_matrix, classification_report, mean_squared_error, mean_absolute_error, r2_score
from sklearn.linear_model import LogisticRegression
from sklearn.utils import shuffle
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization, Activation
from keras.optimizers import Adam
from keras.utils.np_utils import to_categorical
import keras.backend as K
from keras.wrappers.scikit_learn import KerasClassifier
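(Side note: the imports above target the standalone keras 2.x API. If you only have a modern tensorflow install, a rough equivalent is sketched below; this is an assumption about your environment, and the KerasClassifier wrapper has since moved to the separate scikeras package.)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization, Activation
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical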
def plot_data(X, y, figsize=None):
    if not figsize:
        figsize = (8, 6)
    plt.figure(figsize=figsize)
    plt.plot(X[y==0, 0], X[y==0, 1], 'y*', alpha=0.5, label=0)
    plt.plot(X[y==1, 0], X[y==1, 1], 'g*', alpha=0.5, label=1)
    plt.xlim((min(X[:, 0])-0.1, max(X[:, 0])+0.1))
    plt.ylim((min(X[:, 1])-0.1, max(X[:, 1])+0.1))
    plt.legend()
X, y = make_classification(n_samples=1000, n_features=2, n_redundant=0,
n_informative=2, random_state=50, n_clusters_per_class=1)
plot_data(X, y)
lr = LogisticRegression()
lr.fit(X, y)
print('LR coefficients:', lr.coef_)
print('LR intercept:', lr.intercept_)
plot_data(X, y)
limits = np.array([-2, 2])
boundary = -(lr.coef_[0][0] * limits + lr.intercept_[0]) / lr.coef_[0][1]
plt.plot(limits, boundary, "r-", linewidth=2)
plt.show()
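The plotted line comes from solving w0*x0 + w1*x1 + b = 0 for x1, i.e. the set of points where the logistic regression is exactly undecided. As a purely illustrative sanity check (reusing the limits and boundary variables above), the predicted probability on that line should be close to 0.5:

probs = lr.predict_proba(np.c_[limits, boundary])[:, 1]
print(probs)  # both endpoints of the boundary line should score ~0.5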
model = Sequential()
model.add(Dense(units=1, input_shape=(2,), activation='sigmoid'))
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 1) 3
=================================================================
Total params: 3
Trainable params: 3
Non-trainable params: 0
_________________________________________________________________
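The 3 parameters are just the two input weights plus one bias of the single sigmoid unit, so this one-neuron network is essentially logistic regression again. A small sketch to peek at them:

# The Dense layer stores a (2, 1) weight matrix and a (1,) bias vector.
weights, bias = model.layers[0].get_weights()
print(weights.shape, bias.shape)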
Next we define the loss and the optimizer:

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

Besides the loss and the optimizer, compile also takes metrics, the indicators monitored during training. Then we train:

history = model.fit(x=X, y=y, verbose=1, epochs=30)
Epoch 1/30
32/32 [==============================] - 0s 572us/step - loss: 0.1842 - accuracy: 0.9630
Epoch 2/30
32/32 [==============================] - 0s 613us/step - loss: 0.1783 - accuracy: 0.9670
Epoch 3/30
32/32 [==============================] - 0s 574us/step - loss: 0.1728 - accuracy: 0.9670
Epoch 4/30
32/32 [==============================] - 0s 592us/step - loss: 0.1677 - accuracy: 0.9690
Epoch 5/30
32/32 [==============================] - 0s 588us/step - loss: 0.1629 - accuracy: 0.9700
Epoch 6/30
32/32 [==============================] - 0s 562us/step - loss: 0.1585 - accuracy: 0.9700
Epoch 7/30
32/32 [==============================] - 0s 697us/step - loss: 0.1544 - accuracy: 0.9720
Epoch 8/30
32/32 [==============================] - 0s 588us/step - loss: 0.1506 - accuracy: 0.9720
Epoch 9/30
32/32 [==============================] - 0s 564us/step - loss: 0.1470 - accuracy: 0.9720
Epoch 10/30
32/32 [==============================] - 0s 552us/step - loss: 0.1437 - accuracy: 0.9720
Epoch 11/30
32/32 [==============================] - 0s 657us/step - loss: 0.1406 - accuracy: 0.9730
Epoch 12/30
32/32 [==============================] - 0s 594us/step - loss: 0.1378 - accuracy: 0.9730
Epoch 13/30
32/32 [==============================] - 0s 583us/step - loss: 0.1351 - accuracy: 0.9730
Epoch 14/30
32/32 [==============================] - 0s 650us/step - loss: 0.1324 - accuracy: 0.9730
Epoch 15/30
32/32 [==============================] - 0s 586us/step - loss: 0.1300 - accuracy: 0.9730
Epoch 16/30
32/32 [==============================] - 0s 588us/step - loss: 0.1276 - accuracy: 0.9730
Epoch 17/30
32/32 [==============================] - 0s 657us/step - loss: 0.1254 - accuracy: 0.9730
Epoch 18/30
32/32 [==============================] - 0s 581us/step - loss: 0.1232 - accuracy: 0.9730
Epoch 19/30
32/32 [==============================] - 0s 588us/step - loss: 0.1212 - accuracy: 0.9730
Epoch 20/30
32/32 [==============================] - 0s 572us/step - loss: 0.1193 - accuracy: 0.9730
Epoch 21/30
32/32 [==============================] - 0s 598us/step - loss: 0.1175 - accuracy: 0.9730
Epoch 22/30
32/32 [==============================] - 0s 582us/step - loss: 0.1157 - accuracy: 0.9730
Epoch 23/30
32/32 [==============================] - 0s 569us/step - loss: 0.1141 - accuracy: 0.9730
Epoch 24/30
32/32 [==============================] - 0s 587us/step - loss: 0.1125 - accuracy: 0.9730
Epoch 25/30
32/32 [==============================] - 0s 581us/step - loss: 0.1110 - accuracy: 0.9730
Epoch 26/30
32/32 [==============================] - 0s 570us/step - loss: 0.1095 - accuracy: 0.9730
Epoch 27/30
32/32 [==============================] - 0s 633us/step - loss: 0.1081 - accuracy: 0.9730
Epoch 28/30
32/32 [==============================] - 0s 593us/step - loss: 0.1068 - accuracy: 0.9740
Epoch 29/30
32/32 [==============================] - 0s 595us/step - loss: 0.1055 - accuracy: 0.9740
Epoch 30/30
32/32 [==============================] - 0s 597us/step - loss: 0.1042 - accuracy: 0.9750
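As a side note, the binary_crossentropy value keras prints is just the average of -[y*log(p) + (1-y)*log(1-p)] over all samples. A rough manual check (illustrative only) should land near the last epoch's loss:

# Manual binary cross-entropy over the training set.
p = model.predict(X).ravel()
bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(bce)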
Looking at the keras training flow, from model definition to training it is much more concise than pytorch: a single fit call runs all the forward and backward propagation. fit also returns a history object with some handy functional APIs, e.g. pd.DataFrame(history.history). Since we set epochs to 30, the loss and accuracy of every epoch are stored inside this history object.

def plot_loss_accuracy(history):
    historydf = pd.DataFrame(history.history, index=history.epoch)
    plt.figure(figsize=(8, 6))
    historydf.plot(ylim=(0, max(1, historydf.values.max())))
    loss = history.history['loss'][-1]
    acc = history.history['accuracy'][-1]
    plt.title('Loss: %.3f, Accuracy: %.3f' % (loss, acc))
plot_loss_accuracy(history)
A falling loss and a rising accuracy are exactly what we like to see.

def plot_decision_boundary(func, X, y, figsize=(9, 6)):
    amin, bmin = X.min(axis=0) - 0.1
    amax, bmax = X.max(axis=0) + 0.1
    hticks = np.linspace(amin, amax, 101)
    vticks = np.linspace(bmin, bmax, 101)
    aa, bb = np.meshgrid(hticks, vticks)
    ab = np.c_[aa.ravel(), bb.ravel()]
    c = func(ab)
    cc = c.reshape(aa.shape)
    cm = plt.cm.RdBu
    cm_bright = ListedColormap(['#deff17', '#5be044'])
    fig, ax = plt.subplots(figsize=figsize)
    contour = plt.contourf(aa, bb, cc, cmap=cm, alpha=0.8)
    ax_c = fig.colorbar(contour)
    ax_c.set_label("$P(y = 1)$")
    ax_c.set_ticks([0, 0.25, 0.5, 0.75, 1])
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cm_bright)
    plt.xlim(amin, amax)
    plt.ylim(bmin, bmax)
plot_decision_boundary(lambda x: model.predict(x), X, y)
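Since the func argument only needs to map an (n, 2) array of points to probabilities, the same helper also works for the scikit-learn model trained earlier, e.g. (illustrative):

plot_decision_boundary(lambda x: lr.predict_proba(x)[:, 1], X, y)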
y_pred = model.predict_classes(X, verbose=0)
print(classification_report(y, y_pred))
precision recall f1-score support
0 1.00 0.96 0.98 499
1 0.96 1.00 0.98 501
accuracy 0.98 1000
macro avg 0.98 0.98 0.98 1000
weighted avg 0.98 0.98 0.98 1000
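classification_report summarizes per-class precision, recall, and f1; if you also want the raw counts behind those numbers, confusion_matrix (already imported above) provides them:

# Rows are the true classes, columns the predicted classes.
print(confusion_matrix(y, y_pred))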
X, y = make_moons(n_samples=1000, noise=0.05, random_state=0)
plot_data(X, y)
model = Sequential()
model.add(Dense(1, input_shape=(2,), activation='sigmoid'))
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
history = model.fit(X, y, verbose=0, epochs=100)
plot_loss_accuracy(history)
The result is about the same as the pytorch implementation's, as the visualization shows.

y_pred = model.predict_classes(X, verbose=0)
print(classification_report(y, y_pred))
precision recall f1-score support
0 0.79 0.82 0.80 500
1 0.81 0.78 0.80 500
accuracy 0.80 1000
macro avg 0.80 0.80 0.80 1000
weighted avg 0.80 0.80 0.80 1000
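A single sigmoid unit can only carve the plane with a straight line, which is why it stalls around 80% on the two interleaved moons. Drawing its boundary with the helper above (illustrative) makes the limitation visible:

plot_decision_boundary(lambda x: model.predict(x), X, y)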
model = Sequential()
model.add(Dense(4, input_shape=(2,), activation='tanh'))
model.add(Dense(2, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))
model.compile(Adam(lr=0.01), 'binary_crossentropy', metrics=['accuracy'])
history = model.fit(X, y, verbose=0, epochs=100)
plot_loss_accuracy(history)
plot_decision_boundary(lambda x: model.predict(x), X, y)
y_pred = model.predict_classes(X, verbose=0)
print(classification_report(y, y_pred))
precision recall f1-score support
0 1.00 1.00 1.00 500
1 1.00 1.00 1.00 500
accuracy 1.00 1000
macro avg 1.00 1.00 1.00 1000
weighted avg 1.00 1.00 1.00 1000
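For reference, a minimal pytorch sketch of the same architecture (an assumed rough equivalent, not the exact code from the earlier article) shows how much more of the training loop you spell out yourself:

import torch
import torch.nn as nn

# Rough pytorch counterpart of the keras model above.
net = nn.Sequential(
    nn.Linear(2, 4), nn.Tanh(),
    nn.Linear(4, 2), nn.Tanh(),
    nn.Linear(2, 1), nn.Sigmoid(),
)
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)

X_t = torch.tensor(X, dtype=torch.float32)
y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(1)

for epoch in range(100):
    optimizer.zero_grad()            # clear the previous step's gradients
    loss = criterion(net(X_t), y_t)  # forward pass + loss
    loss.backward()                  # backward propagation
    optimizer.step()                 # weight update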
Just like the pytorch model, the same architecture can also reach 100%. Compared to pytorch, keras is relatively simple, but that convenience means many things have to go through its APIs. The author thinks this cuts both ways: when the package does not provide a feature and you cannot mix in methods from outside it, it becomes a real headache. pytorch gives you freedom of use, but the author finds it comparatively hard for beginners; it also took the author a while of tinkering to get these models working.

AI . FREE Team official website: https://aifreeblog.herokuapp.com/
AI . FREE Team GitHub: https://github.com/AI-FREE-Team/
AI . FREE Team Facebook page: https://www.facebook.com/AI.Free.Team/
AI . FREE Team Instagram: https://www.instagram.com/aifreeteam/
AI . FREE Team YouTube: https://www.youtube.com/channel/UCjw6Kuw3kwM_il39NTBJVTg/
This article is published simultaneously on the AI . FREE Team blog.
(Want more articles and more AI knowledge? Stay tuned to the AI . FREE Team channels!)