
DAY25: Model Training with DenseNet201

DenseNet201

  1. Introduction

    • DenseNet inherits ResNet's shortcut-connection mechanism and extends it into a dense connectivity mechanism. Dense connections need fewer parameters than traditional networks because redundant feature maps do not have to be relearned. Dense connections also have a regularizing effect, which lowers the chance of overfitting.

      ResNet shortcut connections

      Image source: https://codingnote.cc/zh-hk/p/153860/

      DenseNet dense connections

      Image source: https://codingnote.cc/zh-hk/p/153860/

    • It alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and reduces the number of parameters.

    • Unlike ResNet, DenseNet feeds features into the next layer by concatenation rather than by ResNet-style element-wise summation (a minimal code sketch of this idea follows the summary list below).

      Image source: https://arxiv.org/pdf/1608.06993.pdf

    • The Dense Block, proposed in 2016, connects every layer to every other layer in a feed-forward fashion. A traditional convolutional network with L layers has L connections (one between each layer and the next), whereas a Dense Block with L layers has L(L + 1)/2 direct connections; for example, with L = 4 there are 4·5/2 = 10 connections. For each layer, the outputs of all preceding layers become its input.

     

    Image source: https://medium.com/%E5%AD%B8%E4%BB%A5%E5%BB%A3%E6%89%8D/dense-cnn-%E5%AD%B8%E7%BF%92%E5%BF%83%E5%BE%97-%E6%8C%81%E7%BA%8C%E6%9B%B4%E6%96%B0-8cd8c65a6f3f

    • Summary of DenseNet's characteristics:

      • Dense Blocks improve the reuse of feature maps, reduce the number of parameters, and lower the risk of vanishing gradients.

      • Dense connections do not need to relearn feature maps, since every layer's input already contains the information from all preceding layers.

      • It is a lightweight model with good accuracy.

      • On ImageNet, DenseNet maintains accuracy while its efficiency even surpasses VGG Net and ResNet.
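
    To make the concatenation idea concrete, here is a minimal, hypothetical sketch of a dense block in PyTorch (not the torchvision implementation): every layer receives the channel-wise concatenation of all earlier feature maps, whereas a ResNet block would add them instead.

      import torch
      import torch.nn as nn

      class TinyDenseBlock(nn.Module):
          """Minimal dense block: each layer sees the concatenation of all previous feature maps."""
          def __init__(self, in_channels, growth_rate=12, num_layers=3):
              super().__init__()
              self.layers = nn.ModuleList()
              for i in range(num_layers):
                  self.layers.append(nn.Sequential(
                      nn.BatchNorm2d(in_channels + i * growth_rate),
                      nn.ReLU(inplace=True),
                      nn.Conv2d(in_channels + i * growth_rate, growth_rate, kernel_size=3, padding=1),
                  ))

          def forward(self, x):
              features = [x]
              for layer in self.layers:
                  out = layer(torch.cat(features, dim=1))   # concatenate, not sum (ResNet would use x + out)
                  features.append(out)
              return torch.cat(features, dim=1)

      # A 3-layer block on a 16-channel input yields 16 + 3*12 = 52 output channels.
      y = TinyDenseBlock(16)(torch.randn(1, 16, 32, 32))
      print(y.shape)   # torch.Size([1, 52, 32, 32])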


Training Process

  1. Import packages

    import torch
    import torch.nn as nn
    from torch.autograd import Variable
    from dataset import CaptchaData
    from torch.utils.data import DataLoader
    from torchvision.transforms import Compose, ToTensor,ColorJitter,RandomRotation,RandomAffine,Resize,Normalize,CenterCrop,RandomApply,RandomErasing
    import torchvision.models as models
    import time
    import copy
    
  2. Loading the dataset and DataLoader

    train_dataset = CaptchaData('./mask_2/train',
                                transform=transforms)
    train_data_loader = DataLoader(train_dataset, batch_size=batch_size, num_workers=0,
                                   shuffle=True, drop_last=True, pin_memory=True)
    test_data = CaptchaData('./mask_2/test',
                            transform=transforms_1)
    test_data_loader = DataLoader(test_data, batch_size=batch_size,
                                  num_workers=0, shuffle=True, drop_last=True, pin_memory=True)
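
    • The CaptchaData class comes from the author's own dataset.py and is not shown in this series. Purely as a hedged sketch, a custom Dataset of this kind might look roughly like the code below; it assumes each image's class index is encoded in its filename and that targets are returned as one-hot vectors over the 800 classes (which is what calculat_acc below expects).

    import os
    import torch
    from PIL import Image
    from torch.utils.data import Dataset

    class CaptchaData(Dataset):
        """Hypothetical sketch only; the real dataset.py may differ."""
        def __init__(self, root, transform=None, num_classes=800):
            self.paths = [os.path.join(root, f) for f in sorted(os.listdir(root))]
            self.transform = transform
            self.num_classes = num_classes

        def __len__(self):
            return len(self.paths)

        def __getitem__(self, idx):
            path = self.paths[idx]
            img = Image.open(path).convert('RGB')
            if self.transform is not None:
                img = self.transform(img)
            # Assumes the class index is encoded in the filename, e.g. '123_sample.png' -> class 123.
            label = int(os.path.basename(path).split('_')[0])
            target = torch.zeros(self.num_classes)
            target[label] = 1.0   # one-hot target over the 800 character classes
            return img, target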
    
    
  3. Setting up the transforms

    • The training set uses transforms that include rotation and affine image transformations, while the test set is only converted to tensors and normalized.
    transform_set = [RandomRotation(degrees=10, fill=(255, 255, 255)),
                     RandomAffine(degrees=(-10, +10), translate=(0.2, 0.2), fillcolor=(255, 255, 255)),
                     RandomAffine(degrees=(-10, +10), scale=(0.8, 0.8), fillcolor=(255, 255, 255)),
                     RandomAffine(degrees=(-10, +10), shear=(0, 0, 0, 20), fillcolor=(255, 255, 255))]

    transforms = Compose([RandomApply(transform_set, p=0.7),
                          ToTensor(),
                          Normalize((0.5,), (0.5,))
                          ])

    transforms_1 = Compose([
                            ToTensor(),
                            Normalize((0.5,), (0.5,))
                            ])
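
    • As a quick sanity check of the augmentation pipeline, you can run it on a single image; the file path below is only a placeholder.

    from PIL import Image

    # Hypothetical path; point it at any training image.
    sample = Image.open('./mask_2/train/sample.png').convert('RGB')
    augmented = transforms(sample)   # the random transforms in transform_set fire with probability 0.7
    print(augmented.shape)           # torch.Size([3, H, W])
    print(augmented.min().item(), augmented.max().item())   # roughly within [-1, 1] after Normalize((0.5,), (0.5,))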
    
  4. Computing accuracy

    def calculat_acc(output, target):
        # Each row is an 800-way classification over the character classes.
        output, target = output.view(-1, 800), target.view(-1, 800)
        output = nn.functional.softmax(output, dim=1)
        output = torch.argmax(output, dim=1)
        target = torch.argmax(target, dim=1)   # targets are one-hot, so argmax recovers the class index
        output, target = output.view(-1, 1), target.view(-1, 1)
        correct_list = []
        for i, j in zip(target, output):
            if torch.equal(i, j):
                correct_list.append(1)
            else:
                correct_list.append(0)
        acc = sum(correct_list) / len(correct_list)
        return acc
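
    • Each row of the output is treated as an 800-way classification (one character class per sample), so the function compares the argmax of the prediction with the argmax of the one-hot target. A quick check with dummy tensors:

    # Dummy batch of 4 samples over the 800 classes.
    dummy_target = torch.zeros(4, 800)
    dummy_target[range(4), [3, 17, 3, 799]] = 1.0                   # one-hot ground truth
    dummy_output = dummy_target * 10 + torch.randn(4, 800) * 0.1    # predictions close to the truth
    print(calculat_acc(dummy_output, dummy_target))                 # expected: 1.0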
    
  5. The model (torchvision DenseNet201)

    model = models.densenet201(num_classes=800)
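
    • Calling models.densenet201(num_classes=800) builds the DenseNet201 architecture with a fresh 800-way classifier; no ImageNet weights are loaded unless pretrained=True is passed. A quick way to inspect the classifier head and the model size:

    model = models.densenet201(num_classes=800)
    print(model.classifier)   # Linear(in_features=1920, out_features=800, bias=True)
    print(sum(p.numel() for p in model.parameters()) / 1e6, 'M parameters')   # roughly 20M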
    
  6. Saving best_model (the model with the highest test score)

    if epoch > min_epoch and acc_best <= acc:
        acc_best = acc
        best_model = copy.deepcopy(model)
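
    • In the full script below, model_path stores the latest state_dict after every epoch, while the best model is saved as a whole pickled module. Loading them back later would look roughly like this (paths as defined in the script):

    # Latest weights: rebuild the architecture, then load the state_dict.
    model = models.densenet201(num_classes=800)
    model.load_state_dict(torch.load('./densenet201_mask.pth'))

    # Best model: saved with torch.save(model_object, ...), so it unpickles into a full nn.Module.
    best_model = torch.load('./densenet201_mask2.pth')
    best_model.eval()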
    
  7. Complete code

import torch
import torch.nn as nn
from torch.autograd import Variable
from dataset import CaptchaData
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, ToTensor,ColorJitter,RandomRotation,RandomAffine,Resize,Normalize,CenterCrop,RandomApply,RandomErasing
import torchvision.models as models
import time
import copy
import matplotlib.pyplot as plt
batch_size = 32
max_epoch = 40
model_path = './densenet201_mask.pth'
restor = False



def calculat_acc(output, target):
    output, target = output.view(-1, 800), target.view(-1, 800)
    output = nn.functional.softmax(output, dim=1)
    output = torch.argmax(output, dim=1)
    target = torch.argmax(target, dim=1)
    output, target = output.view(-1, 1), target.view(-1, 1)
    correct_list = []
    for i, j in zip(target, output):
        if torch.equal(i, j):
            correct_list.append(1)
        else:
            correct_list.append(0)
    acc = sum(correct_list) / len(correct_list)
    return acc


def train():
    acc_best = 0
    best_model = None
    min_epoch = 1

    transform_set = [ RandomRotation(degrees=10,fill=(255, 255, 255)),
                      RandomAffine(degrees=(-10,+10), translate=(0.2, 0.2), fillcolor=(255, 255, 255)),
                      RandomAffine(degrees=(-10,+10),scale=(0.8, 0.8),fillcolor=(255, 255, 255)),
                      RandomAffine(degrees=(-10,+10),shear=(0, 0, 0, 20),fillcolor=(255, 255, 255))
]
    transforms = Compose([RandomApply(transform_set, p=0.7),   # apply the PIL-based augmentations before ToTensor
                          ToTensor(),
                          Normalize((0.5,), (0.5,))
                          ])

    transforms_1 = Compose([
                            ToTensor(),
                            # Normalize((0.5,), (0.5,))
                            ])

    train_dataset = CaptchaData(r'C:\Users\Frank\PycharmProjects\practice\mountain\清洗標籤final\train_nomask',
                                transform=transforms_1)
    train_data_loader = DataLoader(train_dataset, batch_size=batch_size, num_workers=0,
                                   shuffle=True, drop_last=True,pin_memory=True)
    test_data = CaptchaData(r'C:\Users\Frank\PycharmProjects\practice\mountain\清洗標籤final\test_nomask',
                            transform=transforms_1)
    test_data_loader = DataLoader(test_data, batch_size=batch_size,
                                  num_workers=0, shuffle=True, drop_last=True,pin_memory=True)
    print('load.........................')

    model = models.densenet201(num_classes=800)

    if torch.cuda.is_available():
        model.cuda()
    if restor:
        model.load_state_dict(torch.load(model_path))
    # optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)
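    # SGD with momentum; the learning rate then follows a cosine curve (CosineAnnealingLR, half-period T_max=8 epochs)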
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max =8 , eta_min=0, last_epoch=-1, verbose=False)
    criterion = nn.CrossEntropyLoss()
    acc_history_train = []
    loss_history_train = []
    loss_history_test = []
    acc_history_test = []
    for epoch in range(max_epoch):
        start_ = time.time()

        loss_history = []
        acc_history = []
        model.train()

        for img, target in train_data_loader:
            img = Variable(img)
            target = Variable(target)
            if torch.cuda.is_available():
                img = img.cuda()
                target = target.cuda()
            target = target.long()   # avoid copy-constructing a new tensor from an existing tensor
            output = model(img)

            loss = criterion(output, torch.max(target,1)[1])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            acc = calculat_acc(output, target)
            acc_history.append(float(acc))
            loss_history.append(float(loss))
        scheduler.step()
        print('train_loss: {:.4}|train_acc: {:.4}'.format(
            torch.mean(torch.Tensor(loss_history)),
            torch.mean(torch.Tensor(acc_history)),
        ))
        acc_history_train.append((torch.mean(torch.Tensor(acc_history))).float())
        loss_history_train.append((torch.mean(torch.Tensor(loss_history))).float())
        loss_history = []
        acc_history = []
        model.eval()
        with torch.no_grad():   # no gradients are needed during evaluation
            for img, target in test_data_loader:
                img = Variable(img)
                target = Variable(target)
                if torch.cuda.is_available():
                    img = img.cuda()
                    target = target.cuda()
                output = model(img)

                # compute the test loss here; otherwise the last training-batch loss would be reported
                loss = criterion(output, torch.max(target, 1)[1])

                acc = calculat_acc(output, target)
                if epoch > min_epoch and acc_best <= acc:
                    acc_best = acc
                    best_model = copy.deepcopy(model)
                acc_history.append(float(acc))
                loss_history.append(float(loss))
        print('test_loss: {:.4}|test_acc: {:.4}'.format(
            torch.mean(torch.Tensor(loss_history)),
            torch.mean(torch.Tensor(acc_history)),
        ))
        acc_history_test.append((torch.mean(torch.Tensor(acc_history))).float())
        loss_history_test.append((torch.mean(torch.Tensor(loss_history))).float())
        print('epoch: {}|time: {:.4f}'.format(epoch, time.time() - start_))
        print("==============================================")
        torch.save(model.state_dict(), model_path)
        if best_model is not None:   # best_model stays None until epoch > min_epoch
            torch.save(best_model, './densenet201_mask2.pth')
    # plot the accuracy learning curve
    acc = acc_history_train
    epoches = range(1, len(acc) + 1)
    val_acc = acc_history_test
    plt.plot(epoches, acc, 'b', label='Training acc')
    plt.plot(epoches, val_acc, 'r', label='Validation acc')
    plt.title('Training and validation accuracy')
    plt.legend(loc='lower right')
    plt.grid()
    # save the accuracy learning curve
    plt.savefig('./acc_densenet201.png')
    plt.show()

    # plot the loss learning curve
    loss = loss_history_train
    val_loss = loss_history_test
    plt.plot(epoches, loss, 'b', label='Training loss')
    plt.plot(epoches, val_loss, 'r', label='Validation loss')
    plt.title('Training and validation loss')
    plt.legend(loc='upper right')
    plt.grid()
    # save the loss learning curve
    plt.savefig('./loss_densenet201.png')
    plt.show()
if __name__ == "__main__":
    train()
    pass 

Training Results

  1. Learning curves

  2. Accuracy

  3. Summary

    • Training epochs: 20
    • Total training time: 1 hour 55 minutes
    • The callback keeps the model with the highest test_score
    • test_score: 95.03 %
    • Higher accuracy than ResNet, with faster convergence and better overall results.
