1. Preparation

Installation

pip install torch==0.3.1 --user

Check the installed version

import torch
print(torch.__version__)

1.1. DataLoader

DataLoader(dataset, batch_size=1, shuffle=False, sampler=None,
           batch_sampler=None, num_workers=0, collate_fn=None,
           pin_memory=False, drop_last=False, timeout=0,
           worker_init_fn=None, *, prefetch_factor=2,
           persistent_workers=False)
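A minimal usage sketch (the dataset and shapes are placeholders; note that the keyword-only prefetch_factor and persistent_workers arguments require PyTorch >= 1.7):

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100, 3, 32, 32), torch.randint(0, 10, (100,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True)
for images, labels in loader:
    print(images.shape, labels.shape)  # torch.Size([16, 3, 32, 32]) torch.Size([16])
    break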

1.2. Matrix and tensor operations

Convert a NumPy array to a tensor:

x = torch.from_numpy(a)   # shares memory with the NumPy array a
x = torch.Tensor(a)       # copies the data into a new FloatTensor
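A quick sketch of the memory-sharing difference:

import numpy as np
import torch

a = np.zeros((2, 3), dtype=np.float32)
t = torch.from_numpy(a)   # t and a share the same buffer
a[0, 0] = 42.0
print(t[0, 0])            # tensor(42.)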

Expand a dimension:

x1 = torch.zeros(10, 10)
x2 = x1.unsqueeze(0)
print(x2.size())  # torch.Size([1, 10, 10])
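The inverse operation, squeeze, removes size-1 dimensions:

x3 = x2.squeeze(0)
print(x3.size())  # torch.Size([10, 10])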

Reshape

imgs = image.view(-1, 1, 256, 256).repeat(1, 3, 1, 1)  # single-channel images to 3-channel
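For example, starting from a batch of grayscale images (the shapes are illustrative):

import torch

image = torch.rand(4, 256, 256)                         # 4 single-channel images
imgs = image.view(-1, 1, 256, 256).repeat(1, 3, 1, 1)   # replicate the channel 3 times
print(imgs.shape)                                       # torch.Size([4, 3, 256, 256])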

1.3. Upsampling

import torch.nn.functional as F

image_tensor = image_tensor.view(1, 1, img_d, img_h, img_w)
# F.upsample is deprecated; F.interpolate is the current API
resize_tensor = F.interpolate(image_tensor, size=new_shape, mode='trilinear', align_corners=False).data[0, 0]

torch.max(input, dim)  # returns a (values, indices) tuple of the maxima along dim

torch.flatten(input, start_dim, end_dim)  # flatten a contiguous range of dims in a tensor
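A quick check of the two calls above with a concrete tensor:

import torch

x = torch.rand(2, 3, 4)
values, indices = torch.max(x, dim=2)   # maxima along the last dim
print(values.shape, indices.shape)      # torch.Size([2, 3]) torch.Size([2, 3])
print(torch.flatten(x, 1, 2).shape)     # torch.Size([2, 12])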

1.4. Miscellaneous

1.5. Debug

Inspect feature map sizes

import torch
from torch.autograd import Variable  # a no-op wrapper since PyTorch 0.4; plain tensors work too

fms = model(Variable(torch.randn(1, 1, 256, 256)))
for fm in fms:
    print(fm.size())
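For models that only return the final output, forward hooks can print every intermediate size; a minimal sketch with a throwaway model:

import torch
import torch.nn as nn

def print_size(module, inputs, output):
    print(module.__class__.__name__, tuple(output.shape))

model = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.Conv2d(8, 16, 3))
for m in model:
    m.register_forward_hook(print_size)
model(torch.randn(1, 1, 256, 256))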

1.6. Models

1.6.1. Loading a model

For the invalid device ordinal error (the checkpoint was saved on a GPU id that does not exist on this machine), the fix is map_location:

checkpoint = torch.load(self.model_path, map_location=lambda storage, loc: storage)
self.model.load_state_dict(checkpoint['model'], strict=False)
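On recent PyTorch versions a device string works as well:

checkpoint = torch.load(self.model_path, map_location='cpu')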

For loading Python 2 checkpoints under Python 3 (pickle encoding differences):

from functools import partial
import pickle
# Force latin1 decoding so Python 2 byte strings unpickle cleanly under Python 3
pickle.load = partial(pickle.load, encoding="latin1")
pickle.Unpickler = partial(pickle.Unpickler, encoding="latin1")
model = torch.load(model_file, map_location=lambda storage, loc: storage, pickle_module=pickle)

1.6.2. Testing a model

Remember to set volatile=True during inference, otherwise GPU memory fills up with autograd buffers. (volatile was removed in PyTorch 0.4; use torch.no_grad() instead, as sketched below.)

input_img_var = torch.autograd.Variable(images.cuda(), volatile=True)
input_mask_var = torch.autograd.Variable(masks.cuda(), volatile=True)
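The modern equivalent (assuming the same model and images as above) replaces volatile with the torch.no_grad() context:

with torch.no_grad():              # replaces volatile=True
    output = model(images.cuda())  # no autograd buffers are kept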

1.7. CUDA memory

How to select a GPU

torch.cuda.set_device(7)
model.cuda()  # moves the parameters onto the GPU (here roughly 0.9 GB of GPU RAM)
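An alternative is to restrict device visibility before CUDA is initialized; the chosen GPU then appears to PyTorch as cuda:0:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '7'  # must be set before the first CUDA call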

If all input images are the same size, enable the cuDNN autotuner:

torch.backends.cudnn.benchmark = True  # let cuDNN benchmark and cache the fastest conv algorithms

To kill zombie processes that still hold GPU memory (note: this kills every python process you own):

ps x | grep python | awk '{print $1}' | xargs kill
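A narrower alternative (requires the psmisc package) lists only the processes that actually hold the GPU device files:

fuser -v /dev/nvidia*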

See also: Memory Leakage with PyTorch.
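A common cause of such leaks is accumulating loss tensors that still reference the autograd graph; a minimal sketch of the fix (the tiny model and data are placeholders):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
criterion = nn.MSELoss()
loader = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(3)]

total_loss = 0.0
for x, y in loader:
    loss = criterion(model(x), y)
    loss.backward()
    total_loss += loss.item()  # .item() yields a Python float; `total_loss += loss` would keep every graph alive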

1.8. Resources

2. PyTorch Lightning

2.1. pl.LightningModule

An example configure_optimizers with an exponential learning-rate scheduler:

def configure_optimizers(self):
    # Only optimize parameters that require gradients
    optimizer = torch.optim.SGD(
        filter(lambda p: p.requires_grad, self.parameters()),
        lr=self.hparams.lr, momentum=0.9)
    lr_scheduler = {
        'scheduler': torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9995),
        'name': 'lr',  # name under which the learning rate is logged
    }
    return [optimizer], [lr_scheduler]
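The scheduler dict also accepts 'interval' and 'frequency' keys, e.g. for per-step decay:

lr_scheduler = {
    'scheduler': torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9995),
    'name': 'lr',
    'interval': 'step',   # step the scheduler after every batch instead of every epoch
    'frequency': 1,
}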
