Pytorch 2021-11-06

Author: 远方的飞鱼 | Published 2021-12-01 11:17

    Working with Data

    PyTorch has two primitives for working with data: torch.utils.data.DataLoader and torch.utils.data.Dataset. Dataset stores the samples and their labels, and DataLoader wraps an iterable around the Dataset.

    PyTorch offers domain-specific libraries such as TorchText, TorchVision, and TorchAudio.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets
    from torchvision.transforms import ToTensor

    # Download the training and test splits of FashionMNIST.
    training_data = datasets.FashionMNIST(
        root="data",
        train=True,
        download=True,
        transform=ToTensor(),
    )
    test_data = datasets.FashionMNIST(
        root="data",
        train=False,
        download=True,
        transform=ToTensor(),
    )

    We pass the Dataset as an argument to DataLoader. This wraps an iterable over our dataset, and supports automatic batching, sampling, shuffling and multiprocess data loading. Here we define a batch size of 64, i.e. each element in the dataloader iterable will return a batch of 64 features and labels.

    batch_size = 64

    train_dataloader = DataLoader(training_data, batch_size=batch_size)
    test_dataloader = DataLoader(test_data, batch_size=batch_size)

    for X, y in test_dataloader:
        print("Shape of X [N, C, H, W]: ", X.shape)
        print("Shape of y: ", y.shape, y.dtype)
        break
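
    The loaders above iterate sequentially; the constructor also exposes the shuffling and multiprocess loading mentioned earlier. A minimal sketch (the worker count is an illustrative choice, not from the original post):

    train_dataloader = DataLoader(
        training_data,
        batch_size=batch_size,
        shuffle=True,    # reshuffle the training samples at the start of every epoch
        num_workers=2,   # load batches in two worker subprocesses
    )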

    Creating Models

    Get a CPU or GPU device for training.

    device = "cuda" if torch.cuda.is_available() else "cpu"
    print("Using {} device".format(device))

    Define the model.

    class NeuralNetwork(nn.Module):
        def __init__(self):
            super(NeuralNetwork, self).__init__()
            self.flatten = nn.Flatten()
            self.linear_relu_stack = nn.Sequential(
                nn.Linear(28 * 28, 512),
                nn.ReLU(),
                nn.Linear(512, 512),
                nn.ReLU(),
                nn.Linear(512, 10),
            )

        def forward(self, x):
            x = self.flatten(x)
            logits = self.linear_relu_stack(x)
            return logits

    model = NeuralNetwork().to(device)
    print(model)

    Out:

    Using cuda device
    NeuralNetwork(
      (flatten): Flatten(start_dim=1, end_dim=-1)
      (linear_relu_stack): Sequential(
        (0): Linear(in_features=784, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
        (3): ReLU()
        (4): Linear(in_features=512, out_features=10, bias=True)
      )
    )
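
    As a quick sanity check on the printed architecture, the learnable parameters can be counted directly (a short sketch, not part of the original post):

    # Each Linear layer has in_features * out_features weights plus out_features biases:
    # 784*512 + 512 = 401,920; 512*512 + 512 = 262,656; 512*10 + 10 = 5,130.
    num_params = sum(p.numel() for p in model.parameters())
    print(num_params)  # 669706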

    Optimizing the Model Parameters

    To train a model, we need a loss function and an optimizer.

    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
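
    As an aside, CrossEntropyLoss expects raw, unnormalized logits of shape [N, C] and integer class indices of shape [N]; a tiny standalone check (the tensors below are illustrative, not from the original post):

    dummy_logits = torch.randn(3, 10)            # batch of 3 samples, 10 classes
    dummy_targets = torch.tensor([1, 0, 4])      # one class index per sample
    print(loss_fn(dummy_logits, dummy_targets))  # 0-dim loss tensor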

    In a single training loop, the model makes predictions on the training dataset (fed to it in batches), and backpropagates the prediction error to adjust the model's parameters.

    def train(dataloader, model, loss_fn, optimizer):
        size = len(dataloader.dataset)
        model.train()
        for batch, (X, y) in enumerate(dataloader):
            X, y = X.to(device), y.to(device)

            # Compute prediction error
            pred = model(X)
            loss = loss_fn(pred, y)

            # Backpropagation
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            if batch % 100 == 0:
                loss, current = loss.item(), batch * len(X)
                print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")

    def test(dataloader, model, loss_fn):
        size = len(dataloader.dataset)
        num_batches = len(dataloader)
        model.eval()
        test_loss, correct = 0, 0
        with torch.no_grad():
            for X, y in dataloader:
                X, y = X.to(device), y.to(device)
                pred = model(X)
                test_loss += loss_fn(pred, y).item()
                correct += (pred.argmax(1) == y).type(torch.float).sum().item()
        test_loss /= num_batches
        correct /= size
        print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

    The training process is conducted over several iterations (epochs). During each epoch, the model learns parameters to make better predictions. We print the model’s accuracy and loss at each epoch; we’d like to see the accuracy increase and the loss decrease with every epoch.

    epochs = 5
    for t in range(epochs):
        print(f"Epoch {t+1}\n-------------------------------")
        train(train_dataloader, model, loss_fn, optimizer)
        test(test_dataloader, model, loss_fn)
    print("Done!")

    Saving Models

    A common way to save a model is to serialize the internal state dictionary (containing the model parameters).

    torch.save(model.state_dict(), "model.pth")
    print("Saved PyTorch Model State to model.pth")

    Loading Models

    model = NeuralNetwork()
    model.load_state_dict(torch.load("model.pth"))
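
    One way to sanity-check the restored weights is to re-run the evaluation loop; the accuracy should match the final training epoch. A sketch (the name model_gpu is mine, not from the post; the copy is moved to the same device so the test() helper above works):

    model_gpu = NeuralNetwork().to(device)  # illustrative second copy on the GPU
    model_gpu.load_state_dict(torch.load("model.pth"))
    test(test_dataloader, model_gpu, loss_fn)  # should reproduce the last reported accuracy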

    classes = [
        "T-shirt/top",
        "Trouser",
        "Pullover",
        "Dress",
        "Coat",
        "Sandal",
        "Shirt",
        "Sneaker",
        "Bag",
        "Ankle boot",
    ]
    model.eval()
    x, y = test_data[0][0], test_data[0][1]
    with torch.no_grad():
        pred = model(x)
        predicted, actual = classes[pred[0].argmax(0)], classes[y]
        print(f'Predicted: "{predicted}", Actual: "{actual}"')
