DAY[15] - Machine Learning (6): Cross-Validation

In the previous chapter we saw that building a model involves a trade-off between variance and bias. In this chapter we continue with the diabetes dataset used earlier and run a simple cross-validation with the functions provided by sklearn!

The drawback of a single validation set

In the previous post we noted that the validation set is split off from the data in advance. The serious problem with this is that the outcome depends on how the split happens to fall: different splits give different validation scores, and because the validation samples are held out of the training set, the split also affects how the model is trained. A small sketch of this instability follows.
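A minimal sketch of the problem, assuming we load the diabetes data directly through sklearn's load_diabetes and score a plain LinearRegression (not the exact feature pipeline used in this series): the same model gets a different validation score depending purely on which rows land in the validation set.

from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)

# Same model, same data: only the random split of the validation set changes
for seed in (0, 1, 2):
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=seed)
    score = LinearRegression().fit(X_tr, y_tr).score(X_val, y_val)
    print(f"random_state={seed}: validation R^2 = {score:.3f}")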

Cross-validation

Cross-validation creates several validation sets through non-overlapping splits of the data, which averages out the variability caused by any single split. It mitigates the problem above very effectively, but the price is that training takes considerably longer, since the model is refitted once per fold; a small sketch of the fold structure follows the figure below.
(Figure: schematic of the cross-validation (CV) folds)
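As a quick sketch of the fold structure (the ten dummy samples below are purely illustrative), KFold hands back non-overlapping validation folds, so every sample is used for validation exactly once:

import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10).reshape(-1, 1)  # ten dummy samples, just to show the indices

# Each sample appears in exactly one validation fold; together the folds cover the whole dataset
for fold, (train_idx, val_idx) in enumerate(KFold(n_splits=5, shuffle=True, random_state=4).split(X)):
    print(f"fold {fold}: train={train_idx}, validation={val_idx}")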

Implementing cross-validation

First, run the code below to define the plotting function. You don't need to understand the plotting details; interested readers can refer to the official scikit-learn documentation.

# Adapted from the scikit-learn learning-curve example
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve


def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    """
    Generate a simple plot of the test and training learning curve.

    Parameters
    ----------
    estimator : object type that implements the "fit" and "predict" methods
        An object of that type which is cloned for each validation.

    title : string
        Title for the chart.

    X : array-like, shape (n_samples, n_features)
        Training vector, where n_samples is the number of samples and
        n_features is the number of features.

    y : array-like, shape (n_samples) or (n_samples, n_features), optional
        Target relative to X for classification or regression;
        None for unsupervised learning.

    ylim : tuple, shape (ymin, ymax), optional
        Defines minimum and maximum y values plotted.

    cv : int, cross-validation generator or an iterable, optional
        Determines the cross-validation splitting strategy.
        Possible inputs for cv are:
          - None, to use the default 3-fold cross-validation,
          - integer, to specify the number of folds.
          - An object to be used as a cross-validation generator.
          - An iterable yielding train/test splits.

        For integer/None inputs, if ``y`` is binary or multiclass,
        :class:`StratifiedKFold` is used. If the estimator is not a classifier
        or if ``y`` is neither binary nor multiclass, :class:`KFold` is used.

        Refer to the :ref:`User Guide <cross_validation>` for the various
        cross-validators that can be used here.

    n_jobs : integer, optional
        Number of jobs to run in parallel (default 1).
    """
    plt.figure(figsize=(10, 6))  # adjust the figure size
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()

    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")

    plt.legend(loc="best")
    return plt

Once that code has been run, we use LightGBM to carry out the cross-validation:

from lightgbm import LGBMRegressor
from sklearn.model_selection import KFold

# Default LightGBM regressor, evaluated with 5-fold cross-validation
lgbr = LGBMRegressor()
cv = KFold(n_splits=5, random_state=4, shuffle=True)
estimator = lgbr

# dummies / total_data are the feature table and target built in the earlier data-preparation chapters
plot_learning_curve(estimator, "lgbRegressor", dummies, total_data["target"].values,
                    cv=cv, train_sizes=np.linspace(0.2, 1.0, 5))

Two things jump out from this plot. First, the gap between the training curve (red) and the validation curve (green) is very wide at the right end of the plot. Second, the validation score keeps dropping as the number of training samples grows. Both are symptoms of overfitting. At the same time the training score is extremely high, which is exactly why we cannot judge a model purely by its performance on the training set.
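As a side note, if you just want per-fold numbers rather than the full learning curve, cross_val_score with the same KFold object returns each fold's validation score directly. The sketch below assumes the dummies feature table and total_data["target"] from the earlier chapters are still in scope.

from sklearn.model_selection import cross_val_score

# R^2 of the untuned LGBMRegressor on each of the five validation folds
scores = cross_val_score(lgbr, dummies, total_data["target"].values, cv=cv)
print("per-fold scores:", scores)
print("mean / std:", scores.mean(), scores.std())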

To tackle this we turn to the model's configuration, its hyperparameters. How should they be tuned? There is no universally correct setting; every dataset has its own answer, so this step is usually optimized by an exhaustive search over candidate values, which can take a long time (a rough sketch of such a search follows). For the demonstration below we simply use a set of parameters that has already been tuned, which gives a noticeably better model.
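A minimal sketch of such an exhaustive search, assuming the same dummies / total_data objects from the earlier chapters; the parameter grid here is purely illustrative and is not the grid actually used to obtain the tuned values that follow.

from lightgbm import LGBMRegressor
from sklearn.model_selection import GridSearchCV, KFold

# Illustrative grid only; every combination is cross-validated with the same 5-fold splitter
param_grid = {
    "learning_rate": [0.01, 0.05, 0.1],
    "num_leaves": [15, 31, 63],
}

search = GridSearchCV(LGBMRegressor(), param_grid,
                      cv=KFold(n_splits=5, shuffle=True, random_state=4))
search.fit(dummies, total_data["target"].values)
print(search.best_params_, search.best_score_)

The block below uses parameters that were already tuned along these lines: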

from lightgbm import LGBMRegressor
from sklearn.model_selection import KFold

# LightGBM regressor with tuned hyperparameters
lgbr = LGBMRegressor(boosting_type='gbdt',
                     verbose=0,
                     learning_rate=0.01,     # smaller steps per boosting round
                     num_leaves=35,          # caps the complexity of each tree
                     feature_fraction=0.8,   # column subsampling per tree
                     bagging_fraction=0.9,   # row subsampling
                     bagging_freq=8,         # re-sample rows every 8 iterations
                     lambda_l1=0.6,          # L1 regularization
                     lambda_l2=0)            # L2 regularization
cv = KFold(n_splits=5, random_state=4, shuffle=True)
estimator = lgbr

plot_learning_curve(estimator, "lgbRegressor", dummies, total_data["target"].values,
                    cv=cv, train_sizes=np.linspace(0.2, 1.0, 5))

With the tuned model, the training score drops (higher bias) while the gap between the curves shrinks (lower variance), and the validation score no longer falls as the number of samples grows. This model performs much better than the previous one; to push the score higher still, the next lever is the data processing itself.


Previous: DAY[14] - Machine Learning (5): Variance and Bias
Next: DAY[16] - Machine Learning (7): Hyperparameter Tuning