【Day10】Pytorch Recurrent Neural Network-1

Preface

This is already my second encounter with an RNN model. Last time, because the project needed it, I grabbed a model and modified it directly, with almost no understanding of its architecture or underlying principles. During interviews, quite a few interviewers asked about the differences between LSTM and GRU and how they work, and I completely bombed.

First, the imports

Use Dash to call torch_import
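
I don't have the exact contents of the torch_import snippet, but a minimal set of imports that the rest of this post's code assumes would look roughly like this:

# My guess at what the torch_import snippet expands to; the actual snippet may differ
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from tqdm import tqdm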

The hyperparameters:


input_dim = 28
hidden_dim = 256
num_layers = 2
output_class = 10
sequence_length = 28
learning_rate = 0.005
batch_size = 64
num_epochs = 3

Then remember to add the device.
Use Dash to call torch_device
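
Again, only a guess at what torch_device expands to: pick the GPU when one is available, otherwise fall back to the CPU.

# Assumed content of the torch_device snippet
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")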

Load the dataset

Use Dash to call torch_MNIST
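
The torch_MNIST snippet presumably loads MNIST through torchvision and wraps it in DataLoaders, roughly as below (the dataset root and download flag are my own assumptions):

# Assumed content of the torch_MNIST snippet; "dataset/" is just a placeholder path
train_dataset = datasets.MNIST(root="dataset/", train=True, transform=transforms.ToTensor(), download=True)
test_dataset = datasets.MNIST(root="dataset/", train=False, transform=transforms.ToTensor(), download=True)
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)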

Writing the LSTM

Next comes the LSTM architecture; the example below is many-to-one.

class RNN_LSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, output_class):
        super(RNN_LSTM, self).__init__()
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim * sequence_length, output_class)

    def forward(self, x):
        # Initialize the hidden state; you can use zeros / randn
        # The LSTM additionally needs a cell state
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).to(device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).to(device)

        out, _ = self.lstm(
            x, (h0, c0)
        )  # out: tensor of shape (batch_size, seq_length, hidden_dim)
        out = out.reshape(out.shape[0], -1)
        
        # This is why the nn.Linear input shape above is hidden_dim * sequence_length
        out = self.fc(out)
        return out
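
Not part of the original post, but a quick shape check makes the many-to-one idea concrete: each 28x28 MNIST image is treated as a sequence of 28 rows, and the whole sequence maps to a single 10-class prediction.

# Quick sanity check (my own addition), using a throwaway model and a fake batch of 4 images
_model = RNN_LSTM(input_dim, hidden_dim, num_layers, output_class).to(device)
_dummy = torch.randn(4, sequence_length, input_dim).to(device)
print(_model(_dummy).shape)  # torch.Size([4, 10]) -> one class-score vector per sequence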

Let me jot this down quickly while it's still fresh in my memory:
hidden_state (h0): used by LSTM, GRU, and vanilla RNN alike; it records the result of each cell's computation.
cell_state (c0): where the LSTM stores the memory cell's value, which is passed on to the next cell; but if the forget gate is 0, the current cell does not affect this value.
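
To make the two states concrete, here is a small demo (my own addition, not from the snippet) of what nn.LSTM returns: out holds the hidden state at every time step, while h_n and c_n are the final hidden and cell states for each layer.

# Small shape demo of nn.LSTM's outputs (not in the original post)
lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
x = torch.randn(batch_size, sequence_length, input_dim)
out, (h_n, c_n) = lstm(x)
print(out.shape)  # (batch_size, sequence_length, hidden_dim) -> (64, 28, 256)
print(h_n.shape)  # (num_layers, batch_size, hidden_dim)      -> (2, 64, 256)
print(c_n.shape)  # (num_layers, batch_size, hidden_dim)      -> (2, 64, 256)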

Training the LSTM

New up an LSTM

model = RNN_LSTM(input_dim, hidden_dim, num_layers, output_class).to(device)
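
The training loop below uses criterion and optimizer, which never appear in the post; for 10-class MNIST classification a reasonable assumption is cross-entropy loss with Adam:

# Assumed loss function and optimizer (not shown in the original post)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)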

Train the LSTM

for epoch in range(num_epochs):
    for batch_idx, (data, targets) in enumerate(tqdm(train_loader)):

        # Note: the shape goes from (64, 1, 28, 28) to (64, 28, 28) here
        data = data.to(device=device).squeeze(1)
        targets = targets.to(device=device)

        # forward
        scores = model(data)
        loss = criterion(scores, targets)

        # backward
        optimizer.zero_grad()
        loss.backward()

        # gradient descent update step/adam step
        optimizer.step()

Evaluating the LSTM

def check_accuracy(loader, model):

    num_correct = 0
    num_samples = 0
    model.eval()
    
    with torch.no_grad():

        for x, y in loader:
            ### squeeze(1) was applied during training, so apply it here as well
            x = x.to(device=device).squeeze(1)
            y = y.to(device=device)
            scores = model(x)
            _, predictions = scores.max(1)
            num_correct += (predictions == y).sum()
            num_samples += predictions.size(0)


    model.train()
    return num_correct / num_samples

print(f"Accuracy on training set: {check_accuracy(train_loader, model)*100:2f}")
print(f"Accuracy on test set: {check_accuracy(test_loader, model)*100:.2f}")

Digging a hole for myself here: when I have time, I really should study the self-attention mechanism properly.

