DAY24: Model Training with ResNet152

ResNet

  1. Introduction

    • CNNs at the time were mostly shallow designs: simply stacking more layers did not necessarily help, often failed to train at all, and could even make results worse. ResNet, proposed in 2015, changed this. Its residual learning makes deep networks much easier to train and ushered CNNs into the era of very deep architectures.
  2. Residual / Bottleneck Block

    • The residual block (left figure) adds the block's input back to its output through an identity shortcut before the result is passed on; this simple change makes deep networks far easier to train (a minimal PyTorch sketch of the idea appears under point 4 below). The bottleneck block (right figure) uses 1x1 convolutions to reduce the channel width inside the block, which cuts the amount of computation.

      Image source: Deep Residual Learning for Image Recognition
  3. Shortcut connections

    • By element-wise adding each block's output to the input coming from earlier layers, shortcut connections help gradients propagate backward during training.

      Image source: https://iter01.com/568757.html
  4. What problem does ResNet solve?

    • ResNet addresses the degradation problem: as the number of layers grows, training results get worse instead of better. One reason deep networks are hard to train is that the deeper the network, the more likely gradients are to explode or vanish; batch normalization alleviates this to some extent.

    • However, batch normalization alone does not fully solve degradation. The paper therefore introduces identity mappings, so that adding layers does not increase the training error. See this article for details (link).
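
    • The sketch below shows this idea in code: a minimal basic residual block whose output is F(x) + x, with the shortcut carrying x forward unchanged. It is written for illustration only and is a simplified version of the blocks in the paper, not the torchvision implementation.

    import torch
    import torch.nn as nn

    class BasicResidualBlock(nn.Module):
        """Minimal residual block: output = ReLU(F(x) + x)."""
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            identity = x                      # the shortcut keeps the input as-is
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            out = out + identity              # element-wise addition (residual learning)
            return self.relu(out)

    x = torch.randn(1, 64, 32, 32)
    print(BasicResidualBlock(64)(x).shape)    # torch.Size([1, 64, 32, 32])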


Training Process

  1. Imported packages

    import torch
    import torch.nn as nn
    from torch.autograd import Variable
    from dataset import CaptchaData
    from torch.utils.data import DataLoader
    from torchvision.transforms import Compose, ToTensor,ColorJitter,RandomRotation,RandomAffine,Resize,Normalize,CenterCrop,RandomApply,RandomErasing
    import torchvision.models as models
    import time
    import copy
    
  2. Loading the dataset and the DataLoader

    train_dataset = CaptchaData('./mask_2/train', transform=transforms)
    train_data_loader = DataLoader(train_dataset, batch_size=batch_size, num_workers=0,
                                   shuffle=True, drop_last=True, pin_memory=True)
    test_data = CaptchaData('./mask_2/test', transform=transforms_1)
    test_data_loader = DataLoader(test_data, batch_size=batch_size, num_workers=0,
                                  shuffle=True, drop_last=True, pin_memory=True)
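
    • CaptchaData comes from the author's own dataset module, which is not shown in this post. The sketch below is a hypothetical Dataset with the same interface; the folder layout (one subfolder per class) and the one-hot targets over 800 classes are assumptions based on how the accuracy function further down reshapes the target.

    import os
    import torch
    from PIL import Image
    from torch.utils.data import Dataset

    class CaptchaDataSketch(Dataset):
        """Hypothetical stand-in for CaptchaData: <root>/<class_index>/<image>."""
        def __init__(self, root, transform=None, num_classes=800):
            self.samples = [(os.path.join(root, c, f), int(c))
                            for c in sorted(os.listdir(root))
                            for f in os.listdir(os.path.join(root, c))]
            self.transform = transform
            self.num_classes = num_classes

        def __len__(self):
            return len(self.samples)

        def __getitem__(self, idx):
            path, label = self.samples[idx]
            img = Image.open(path).convert('RGB')
            if self.transform is not None:
                img = self.transform(img)
            target = torch.zeros(self.num_classes)
            target[label] = 1.0               # one-hot target, matching calculat_acc
            return img, target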
    
  3. Setting up the transforms

    • The training set uses rotation and affine transforms for augmentation, while the test set is only converted to tensors and normalized.
    transform_set = [RandomRotation(degrees=10, fill=(255, 255, 255)),
                     RandomAffine(degrees=(-10, +10), translate=(0.2, 0.2), fillcolor=(255, 255, 255)),
                     RandomAffine(degrees=(-10, +10), scale=(0.8, 0.8), fillcolor=(255, 255, 255)),
                     RandomAffine(degrees=(-10, +10), shear=(0, 0, 0, 20), fillcolor=(255, 255, 255))]

    transforms = Compose([RandomApply(transform_set, p=0.7),
                          ToTensor(),
                          Normalize((0.5,), (0.5,))])

    transforms_1 = Compose([ToTensor(),
                            Normalize((0.5,), (0.5,))])
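
    • As a quick sanity check, the training pipeline can be applied to a single image ('sample.png' here is just a placeholder path):

    from PIL import Image

    img = Image.open('sample.png').convert('RGB')    # any RGB training image
    x = transforms(img)                              # random augmentation + ToTensor + Normalize
    print(x.shape, x.min().item(), x.max().item())   # e.g. torch.Size([3, H, W])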
    
  4. Computing accuracy

    def calculat_acc(output, target):
        # Both output and target are reshaped to (N, 800): 800 classes, one-hot targets.
        output, target = output.view(-1, 800), target.view(-1, 800)
        output = nn.functional.softmax(output, dim=1)
        output = torch.argmax(output, dim=1)      # predicted class per sample
        target = torch.argmax(target, dim=1)      # true class per sample
        output, target = output.view(-1, 1), target.view(-1, 1)
        correct_list = []
        for i, j in zip(target, output):
            if torch.equal(i, j):
                correct_list.append(1)
            else:
                correct_list.append(0)
        acc = sum(correct_list) / len(correct_list)
        return acc
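
    • A quick check with random data shows the expected shapes (logits and one-hot targets over 800 classes):

    logits = torch.randn(4, 800)                    # fake model outputs
    labels = torch.zeros(4, 800)                    # fake one-hot targets
    labels[torch.arange(4), torch.randint(0, 800, (4,))] = 1
    print(calculat_acc(logits, labels))             # fraction of correctly classified rows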
    
  5. The model

    • models.resnet152(num_classes=800) builds the ResNet152 architecture with an 800-way output layer; note that no pretrained weights are loaded here, so the network is trained from scratch.

    model = models.resnet152(num_classes=800)
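
    • If ImageNet-pretrained weights were wanted instead (not what is done in this post), one option with the imports above is to load the pretrained network and replace its final layer:

    model = models.resnet152(pretrained=True)            # ImageNet weights
    model.fc = nn.Linear(model.fc.in_features, 800)      # 800-way classifier head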
    
  6. Saving best_model (the model with the highest test score)

    if epoch > min_epoch and acc_best <= acc:
        acc_best = acc
        best_model = copy.deepcopy(model)
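
    • In the full code below, the latest weights are saved as a state_dict ('./resnet_mask2.pth') and the best model is saved as a whole model object ('./resnet152_mask2.pth'), so they are reloaded differently; a minimal sketch:

    # best model: saved with torch.save(best_model, ...), so torch.load returns the model itself
    best = torch.load('./resnet152_mask2.pth')
    best.eval()

    # latest model: saved with torch.save(model.state_dict(), ...), so rebuild and load the weights
    latest = models.resnet152(num_classes=800)
    latest.load_state_dict(torch.load('./resnet_mask2.pth'))
    latest.eval()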
    
  7. Full code

import torch
import torch.nn as nn
from torch.autograd import Variable
from dataset import CaptchaData
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, ToTensor,ColorJitter,RandomRotation,RandomAffine,Resize,Normalize,CenterCrop,RandomApply,RandomErasing
import torchvision.models as models
import time
import copy
import matplotlib.pyplot as plt
batch_size = 32
base_lr = 0.01
max_epoch = 25
model_path = './resnet_mask2.pth'
restor = False



def calculat_acc(output, target):
    output, target = output.view(-1, 800), target.view(-1, 800)
    output = nn.functional.softmax(output, dim=1)
    output = torch.argmax(output, dim=1)
    target = torch.argmax(target, dim=1)
    output, target = output.view(-1, 1), target.view(-1, 1)
    correct_list = []
    for i, j in zip(target, output):
        if torch.equal(i, j):
            correct_list.append(1)
        else:
            correct_list.append(0)
    acc = sum(correct_list) / len(correct_list)
    return acc


def train():
    acc_best = 0
    best_model = None
    min_epoch = 1

    transform_set = [RandomRotation(degrees=10, fill=(255, 255, 255)),
                     RandomAffine(degrees=(-10, +10), translate=(0.2, 0.2), fillcolor=(255, 255, 255)),
                     RandomAffine(degrees=(-10, +10), scale=(0.8, 0.8), fillcolor=(255, 255, 255)),
                     RandomAffine(degrees=(-10, +10), shear=(0, 0, 0, 20), fillcolor=(255, 255, 255))]
    # apply the PIL-based augmentations before ToTensor (as in the snippet above);
    # RandomAffine with fillcolor operates on PIL images, not tensors
    transforms = Compose([RandomApply(transform_set, p=0.7),
                          ToTensor(),
                          Normalize((0.5,), (0.5,))])

    transforms_1 = Compose([
                            ToTensor(),
                            Normalize((0.5,), (0.5,))
                            ])

    train_dataset = CaptchaData(r'C:\Users\Frank\PycharmProjects\practice\mountain\清洗標籤final\train_nomask',
                                transform=transforms)
    train_data_loader = DataLoader(train_dataset, batch_size=batch_size, num_workers=0,
                                   shuffle=True, drop_last=True,pin_memory=True)
    test_data = CaptchaData(r'C:\Users\Frank\PycharmProjects\practice\mountain\清洗標籤final\test_nomask',
                            transform=transforms_1)
    test_data_loader = DataLoader(test_data, batch_size=batch_size,
                                  num_workers=0, shuffle=True, drop_last=True,pin_memory=True)
    print('load.........................')

    model = models.resnet152(num_classes=800)

    if torch.cuda.is_available():
        model.cuda()
    if restor:
        model.load_state_dict(torch.load(model_path))

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.8)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max =10 , eta_min=0, last_epoch=-1, verbose=False)
    criterion = nn.CrossEntropyLoss()
    acc_history_train = []
    loss_history_train = []
    loss_history_test = []
    acc_history_test = []
    for epoch in range(max_epoch):
        start_ = time.time()

        loss_history = []
        acc_history = []
        model.train()

        for img, target in train_data_loader:
            img = Variable(img)
            target = Variable(target)
            if torch.cuda.is_available():
                img = img.cuda()
                target = target.cuda()
            target = target.long()            # keep targets as integer (one-hot) tensors
            output = model(img)

            # CrossEntropyLoss expects class indices, so the one-hot target is argmaxed
            loss = criterion(output, torch.max(target, 1)[1])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            acc = calculat_acc(output, target)
            acc_history.append(float(acc))
            loss_history.append(float(loss))
        scheduler.step()
        print('train_loss: {:.4}|train_acc: {:.4}'.format(
            torch.mean(torch.Tensor(loss_history)),
            torch.mean(torch.Tensor(acc_history)),
        ))
        acc_history_train.append((torch.mean(torch.Tensor(acc_history))).float())
        loss_history_train.append((torch.mean(torch.Tensor(loss_history))).float())
        loss_history = []
        acc_history = []
        model.eval()
        with torch.no_grad():                 # no gradient tracking during evaluation
            for img, target in test_data_loader:
                img = Variable(img)
                target = Variable(target)
                if torch.cuda.is_available():
                    img = img.cuda()
                    target = target.cuda()
                output = model(img)

                # test loss for this batch (one-hot target -> class indices)
                loss = criterion(output, torch.max(target, 1)[1])

                acc = calculat_acc(output, target)
                if epoch > min_epoch and acc_best <= acc:
                    acc_best = acc
                    best_model = copy.deepcopy(model)
                acc_history.append(float(acc))
                loss_history.append(float(loss))
        print('test_loss: {:.4}|test_acc: {:.4}'.format(
            torch.mean(torch.Tensor(loss_history)),
            torch.mean(torch.Tensor(acc_history)),
        ))
        acc_history_test.append((torch.mean(torch.Tensor(acc_history))).float())
        loss_history_test.append((torch.mean(torch.Tensor(loss_history))).float())
        print('epoch: {}|time: {:.4f}'.format(epoch, time.time() - start_))
        print("==============================================")
        torch.save(model.state_dict(), model_path)               # latest weights (state_dict)
        if best_model is not None:
            torch.save(best_model, './resnet152_mask2.pth')      # best model so far (whole model)
    # plot the accuracy learning curve
    acc = acc_history_train
    epoches = range(1, len(acc) + 1)
    val_acc = acc_history_test
    plt.plot(epoches, acc, 'b', label='Training acc')
    plt.plot(epoches, val_acc, 'r', label='Validation acc')
    plt.title('Training and validation accuracy')
    plt.legend(loc='lower right')
    plt.grid()
    # save the accuracy learning curve
    plt.savefig('./acc_ResNet152.png')
    plt.show()

    # plot the loss learning curve
    plt.figure()               # start a new figure so the loss curves are not drawn over the accuracy curves
    loss = loss_history_train
    val_loss = loss_history_test
    plt.plot(epoches, loss, 'b', label='Training loss')
    plt.plot(epoches, val_loss, 'r', label='Validation loss')
    plt.title('Training and validation loss')
    plt.legend(loc='upper right')
    plt.grid()
    # save the loss learning curve
    plt.savefig('./loss_ResNet152.png')
    plt.show()
if __name__ == "__main__":
    train()

Model Training Results

  1. Learning curves

  2. Accuracy

  3. Summary

    • Training epochs: 40
    • Total training time: 3 hours 52 minutes
    • The callback keeps the model with the highest test_score
    • test_score: 86.16%

Today's Recap

  • ResNet152 still falls short of the accuracy we are aiming for.
  • Tomorrow we will train a DenseNet201 model and see how it performs!
