- Having covered the basics of PyTorch earlier, today we will formally write a Linear Regression model following the conventions of a PyTorch model class
- On to the code~
Linear Regression Classes
- What does the architecture of Linear Regression consist of? Very simple: a single linear function
- So how do we write that as a PyTorch model?
```python
class LinearRegression(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(LinearRegression, self).__init__()
        # define layers
        self.linear = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        return self.linear(x)
```
- Why write it this way? A typical neural network is a complex structure made up of many different layers. By convention we define all the layer functions inside `__init__`, while `forward()` defines the network's structure and how data actually flows through it. That is why the model is written like this; we will see it again in the later CNN and basic neural network examples
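One detail worth noting: you never call `forward()` yourself. PyTorch invokes it for you through `nn.Module.__call__` when you apply the model to a tensor. A minimal sketch (the dimensions here are made up purely for illustration):

```python
import torch
import torch.nn as nn

class LinearRegression(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(LinearRegression, self).__init__()
        # define layers once, in __init__
        self.linear = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        # define how data flows through the layers
        return self.linear(x)

model = LinearRegression(3, 1)   # 3 input features, 1 output
x = torch.randn(5, 3)            # a batch of 5 samples
y = model(x)                     # calls forward() via __call__
print(y.shape)                   # torch.Size([5, 1])
```

Calling `model(x)` instead of `model.forward(x)` matters because `__call__` also runs registered hooks before and after the forward pass.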
Linear Regression Example
```python
import torch
import torch.nn as nn
import numpy as np
from sklearn import datasets
import matplotlib.pyplot as plt

# 0) prepare data
feature_numpy, target_numpy = datasets.make_regression(n_samples=100, n_features=1, noise=20, random_state=1234)
feature = torch.from_numpy(feature_numpy.astype(np.float32))
target = torch.from_numpy(target_numpy.astype(np.float32))
target = target.view(target.shape[0], 1)  # reshape targets into a column vector

n_samples, n_features = feature.shape

# 1) model
class LinearRegression(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(LinearRegression, self).__init__()
        # define layers
        self.linear = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        return self.linear(x)

model = LinearRegression(n_features, 1)

# 2) loss and optimizer
learning_rate = 0.01
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

# 3) training loop
epochs = 100
for epoch in range(epochs):
    # forward pass and loss
    y_predicted = model(feature)
    loss = criterion(y_predicted, target)
    # backward pass
    loss.backward()
    # update
    optimizer.step()
    # reset gradients so they do not accumulate across epochs
    optimizer.zero_grad()
    if (epoch + 1) % 10 == 0:
        print(f'epoch: {epoch+1}, loss = {loss.item():.4f}')

# show in image
predicted = model(feature).detach().numpy()
plt.plot(feature_numpy, target_numpy, 'ro')
plt.plot(feature_numpy, predicted, 'b')
plt.show()
```
outputs

```
epoch: 10, loss = 5616.6792
epoch: 20, loss = 3864.4285
epoch: 30, loss = 2691.9751
epoch: 40, loss = 1907.3096
epoch: 50, loss = 1382.0626
epoch: 60, loss = 1030.3954
epoch: 70, loss = 794.8961
epoch: 80, loss = 637.1581
epoch: 90, loss = 531.4833
epoch: 100, loss = 460.6736
```
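After training, the learned slope and intercept can be read straight off the `nn.Linear` layer. A self-contained sketch on tiny noise-free data (the data and hyperparameters here are made up for illustration, not from the example above):

```python
import torch
import torch.nn as nn

# tiny synthetic data following y = 2x + 1, just to illustrate
x = torch.tensor([[0.0], [1.0], [2.0], [3.0]])
y = 2 * x + 1

model = nn.Linear(1, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

for _ in range(2000):
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# read the learned parameters straight off the layer
w, b = model.weight.item(), model.bias.item()
print(f'learned: y = {w:.2f} * x + {b:.2f}')  # close to y = 2x + 1

# inference without tracking gradients (an alternative to .detach())
with torch.no_grad():
    pred = model(x)
```

Wrapping inference in `torch.no_grad()` avoids building the autograd graph at all, which is slightly cheaper than calling `.detach()` afterwards as the main example does.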
Daily Recap
- Writing this as a class may feel like overkill right now, but the later examples should make its value clear. Today marks a step in the evolution of our code: from pure Python with no PyTorch at all, to gradually using PyTorch to solve each piece of the problem, to finally writing everything the full PyTorch way. The hope is that you can feel how PyTorch really helps you focus on the important structure instead of the underlying math
- Tomorrow we will look at rewriting the Logistic Regression we wrote earlier into PyTorch