Pytorch API

Author: leon_tly | Published 2022-07-13 00:21

    torch.max

    function:: max(input, dim, keepdim=False, *, out=None) -> (Tensor, LongTensor)

    • Args:
      input (Tensor): the input tensor.
      dim (int): the dimension to reduce.
      keepdim (bool): whether the output tensor has :attr:dim retained or not, i.e. whether the reduced dimension is kept with size 1. Default: False.
    • Keyword args:
      out (tuple, optional): the result tuple of two output tensors (max, max_indices)
    • Example
    >>> import torch
    >>> 
    >>> y = torch.rand(2,3)
    >>> print(y)
    tensor([[0.0460, 0.0637, 0.1941],
            [0.1923, 0.4996, 0.7444]])
    >>> print(torch.max(y,0))
    torch.return_types.max(
    values=tensor([0.1923, 0.4996, 0.7444]),
    indices=tensor([1, 1, 1]))
    
    

    Understanding: the elements along the specified dimension are compared position by position, producing a new tensor.
    To find the elements of a given dimension, count the brackets: the elements enclosed by the outermost bracket make up dimension 0, and so on.

    tensor([[0.0460, 0.0637, 0.1941],
            [0.1923, 0.4996, 0.7444]])
    # The elements along dimension 0 are [0.0460, 0.0637, 0.1941] and [0.1923, 0.4996, 0.7444].
    # Comparing them position by position gives the new tensor([0.1923, 0.4996, 0.7444]).
    # indices records, for each element of the new tensor, its position in the original tensor along dimension 0;
    # here every maximum comes from element 1.
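
    A minimal sketch of the dim and keepdim arguments, using a small hand-written tensor so the values are predictable:

    >>> z = torch.tensor([[1.0, 5.0, 3.0],
    ...                   [4.0, 2.0, 6.0]])
    >>> torch.max(z, 1)                  # reduce along dim 1: one maximum per row
    torch.return_types.max(
    values=tensor([5., 6.]),
    indices=tensor([1, 2]))
    >>> torch.max(z, 1, keepdim=True)    # keepdim=True keeps the reduced dim with size 1
    torch.return_types.max(
    values=tensor([[5.],
            [6.]]),
    indices=tensor([[1],
            [2]]))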
    

    function:: max(input, other, *, out=None) -> Tensor
    See torch.maximum

    torch.maximum

    • Args:
      input (Tensor): the input tensor.
      other (Tensor): the second input tensor.
    • Keyword args:
      out (Tensor, optional): the output tensor.
    • Example
    # same shape
    >>> x = torch.rand(2,3)
    >>> y = torch.rand(2,3)
    >>> torch.maximum(x, y)
    tensor([[0.5473, 0.6836, 0.9704],
            [0.9468, 0.8070, 0.7701]])
    >>> x
    tensor([[0.5473, 0.2766, 0.3458],
            [0.9468, 0.4389, 0.7701]])
    >>> y
    tensor([[0.4181, 0.6836, 0.9704],
            [0.2990, 0.8070, 0.0956]])
    
    
    # different shapes: the smaller tensor is broadcast to match the larger one
    >>> x = torch.rand(1,3)
    >>> x
    tensor([[0.1832, 0.3024, 0.9711]])
    >>> y = torch.rand(2,3)
    >>> y
    tensor([[0.6023, 0.9995, 0.2570],
            [0.3201, 0.6395, 0.6141]])
    >>> torch.maximum(x,y)
    tensor([[0.6023, 0.9995, 0.9711],
            [0.3201, 0.6395, 0.9711]])
    

    Same shape: the element-wise maximum is taken directly at each position.
    Different shapes: the smaller tensor is first broadcast to match the larger one, then the element-wise maximum is taken at each position.
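
    Another minimal sketch of the broadcasting behaviour: comparing against a hand-written 1-D tensor of zeros broadcasts it across the rows, which effectively clamps negative values to zero.

    >>> x = torch.tensor([[ 1.0, -2.0,  3.0],
    ...                   [-4.0,  5.0, -6.0]])
    >>> zero = torch.tensor([0.0, 0.0, 0.0])   # shape (3,), broadcast to (2, 3)
    >>> torch.maximum(x, zero)
    tensor([[1., 0., 3.],
            [0., 5., 0.]])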

    torch.cat

    function:: cat(tensors, dim=0, *, out=None) -> Tensor

    • Args:
      tensors (sequence of Tensors): any python sequence of tensors of the same type.
      Non-empty tensors provided must have the same shape, except in the
      cat dimension.
      dim (int, optional): the dimension over which the tensors are concatenated

    • Keyword args:
      out (Tensor, optional): the output tensor.

    • Example::

    >>> x = torch.randn(2, 3)
    >>> x
    tensor([[ 0.6580, -1.0969, -0.4614],
            [-0.1034, -0.5790,  0.1497]])
    >>> torch.cat((x, x, x), 0)
    tensor([[ 0.6580, -1.0969, -0.4614],
            [-0.1034, -0.5790,  0.1497],
            [ 0.6580, -1.0969, -0.4614],
            [-0.1034, -0.5790,  0.1497],
            [ 0.6580, -1.0969, -0.4614],
            [-0.1034, -0.5790,  0.1497]])
    >>> torch.cat((x, x, x), 1)
    tensor([[ 0.6580, -1.0969, -0.4614,  0.6580, -1.0969, -0.4614,  0.6580, -1.0969, -0.4614],
            [-0.1034, -0.5790,  0.1497, -0.1034, -0.5790,  0.1497, -0.1034, -0.5790,  0.1497]])
    

    Understanding: concatenating along dimension 0 means the newly added elements all belong to dimension 0;
    concatenating along dimension 1 appends the data to dimension 1 at the corresponding positions.
    Except for the cat dimension, all other dimensions must match.

    • Example
    >>> x = torch.rand(2,3)
    >>> x
    tensor([[0.1035, 0.8591, 0.2701],
            [0.4790, 0.7983, 0.5008]])
    >>> y = torch.rand(1,3)
    >>> y
    tensor([[0.1822, 0.2422, 0.7784]])
    >>> torch.cat((x,y),1)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    RuntimeError: Sizes of tensors must match except in dimension 1. Got 2 and 1 in dimension 0 (The offending index is 1)
    >>> torch.cat((x,y),0)
    tensor([[0.1035, 0.8591, 0.2701],
            [0.4790, 0.7983, 0.5008],
            [0.1822, 0.2422, 0.7784]])
    >>> y = torch.rand(2,3)
    >>> torch.cat((x,y),1)
    tensor([[0.1035, 0.8591, 0.2701, 0.0820, 0.7908, 0.1364],
            [0.4790, 0.7983, 0.5008, 0.9558, 0.9075, 0.1516]])
    >>> y = torch.rand(2,3)
    >>> x = torch.rand(2,1)
    >>> torch.cat((x,y),1)
    tensor([[0.3792, 0.1308, 0.6855, 0.9484],
            [0.5181, 0.0119, 0.7662, 0.1268]])
    

    torch.stack

    function:: stack(tensors, dim=0, *, out=None) -> Tensor
    Inserts a new dimension; all tensors must have the same size.

    • Arguments:
      tensors (sequence of Tensors): sequence of tensors to concatenate
      dim (int): dimension to insert. Has to be between 0 and the number
      of dimensions of concatenated tensors (inclusive)

    • Keyword args:
      out (Tensor, optional): the output tensor.

    • Example

    >>> t = torch.rand(2,3)
    >>> t
    tensor([[0.9938, 0.7562, 0.4844],
            [0.3777, 0.4943, 0.7704]])
    >>> torch.stack((t,t), 0)
    tensor([[[0.9938, 0.7562, 0.4844],
             [0.3777, 0.4943, 0.7704]],
    
            [[0.9938, 0.7562, 0.4844],
             [0.3777, 0.4943, 0.7704]]])
    

    Understanding: a new dimension is inserted at the requested position, and the data is joined along this new dimension.


    • Differences from torch.cat
    1. cat appends elements along an existing dimension and does not change the number of dimensions;
       stack inserts a new dimension and joins the data along it, so the number of dimensions increases.
    2. With cat, the sizes along the concatenation dimension may differ; stack requires all tensors to have exactly the same size.
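
    A minimal sketch that makes the relationship explicit: stacking two tensors along dim 0 gives the same result as unsqueezing each one and then concatenating.

    >>> a = torch.rand(2, 3)
    >>> b = torch.rand(2, 3)
    >>> s = torch.stack((a, b), 0)                         # shape (2, 2, 3): a new dim is inserted
    >>> c = torch.cat((a.unsqueeze(0), b.unsqueeze(0)), 0)
    >>> torch.equal(s, c)
    True
    >>> torch.cat((a, b), 0).shape                         # cat keeps the number of dims
    torch.Size([4, 3])
    >>> torch.stack((a, b), 0).shape                       # stack adds a dim
    torch.Size([2, 2, 3])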

    torch.sum

    function:: sum(input, *, dtype=None) -> Tensor
    Returns the sum of all elements in the :attr:input tensor.

    • Args:
      input (Tensor): the input tensor.

    • Keyword args:
      dtype (:class:torch.dtype, optional): the desired data type of returned tensor.
      If specified, the input tensor is casted to :attr:dtype before the operation
      is performed. This is useful for preventing data type overflows. Default: None.

    • Example::

    >>> a = torch.randn(1, 3)
    >>> a
    tensor([[ 0.1133, -0.9567,  0.2958]])
    >>> torch.sum(a)
    tensor(-0.5475)
    

    function:: sum(input, dim, keepdim=False, *, dtype=None) -> Tensor

    Returns the sum of each row of the :attr:input tensor in the given
    dimension :attr:dim. If :attr:dim is a list of dimensions,
    reduce over all of them.

    If :attr:keepdim is True, the output tensor is of the same size
    as :attr:input except in the dimension(s) :attr:dim where it is of size 1.
    Otherwise, :attr:dim is squeezed (see :func:torch.squeeze), resulting in the
    output tensor having 1 (or len(dim)) fewer dimension(s).

    • Args:
      input (Tensor): the input tensor.
      dim (int or tuple of ints): the dimension or dimensions to reduce. Several dimensions may be given; the sum is then taken over each of them in turn.
      keepdim (bool): whether the output tensor has :attr:dim retained or not.

    • Keyword args:
      dtype (:class:torch.dtype, optional): the desired data type of returned tensor.
      If specified, the input tensor is casted to :attr:dtype before the operation
      is performed. This is useful for preventing data type overflows. Default: None.

    Example::

    >>> a = torch.randn(4, 4)
    >>> a
    tensor([[ 0.0569, -0.2475,  0.0737, -0.3429],
            [-0.2993,  0.9138,  0.9337, -1.6864],
            [ 0.1132,  0.7892, -0.1003,  0.5688],
            [ 0.3637, -0.9906, -0.4752, -1.5197]])
    >>> torch.sum(a, 1)
    tensor([-0.4598, -0.1381,  1.3708, -2.6217])
    >>> b = torch.arange(4 * 5 * 6).view(4, 5, 6)
    >>> torch.sum(b, (2, 1))
    tensor([  435.,  1335.,  2235.,  3135.])
    # the original tensor has dims [0, 1, 2] and size [4, 5, 6]
    # summing over dim 2 reduces the size to [4, 5]
    # then summing over dim 1 reduces the size to [4]
    

    Understanding: the sum is taken along the specified dimension. In the example above, summing along dimension 0 would add the row vectors together, while summing along dimension 1 adds the elements within each row.
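
    A minimal sketch of how keepdim changes the output shape, using a hand-written tensor of ones so the sums are obvious:

    >>> a = torch.ones(4, 4)
    >>> torch.sum(a, 1).shape                 # dim 1 is squeezed away
    torch.Size([4])
    >>> torch.sum(a, 1, keepdim=True).shape   # dim 1 is kept with size 1
    torch.Size([4, 1])
    >>> torch.sum(a, 1, keepdim=True)
    tensor([[4.],
            [4.],
            [4.],
            [4.]])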

    torch.squeeze

    squeeze(...)
    squeeze(input, dim=None, *, out=None) -> Tensor

    Returns a tensor with all the dimensions of :attr:input of size 1 removed.

    For example, if input is of shape:
    :math:(A \times 1 \times B \times C \times 1 \times D) then the out tensor
    will be of shape: :math:(A \times B \times C \times D).

    When :attr:dim is given, a squeeze operation is done only in the given
    dimension. If input is of shape: :math:(A \times 1 \times B),
    squeeze(input, 0) leaves the tensor unchanged, but squeeze(input, 1)
    will squeeze the tensor to the shape :math:(A \times B).

    .. note:: The returned tensor shares the storage with the input tensor,
    so changing the contents of one will change the contents of the other.

    .. warning:: If the tensor has a batch dimension of size 1, then squeeze(input)
    will also remove the batch dimension, which can lead to unexpected
    errors.
    In deep learning the data usually carries a batch dimension; if the batch size is 1, calling squeeze without specifying a dim may silently drop the batch dimension.

    • Args:
      input (Tensor): the input tensor.
      dim (int, optional): if given, the input will be squeezed only in this dimension.

    • Keyword args:
      out (Tensor, optional): the output tensor.

    • Example::

    >>> x = torch.zeros(2, 1, 2, 1, 2)
    >>> x.size()
    torch.Size([2, 1, 2, 1, 2])
    >>> y = torch.squeeze(x)
    >>> y.size()
    torch.Size([2, 2, 2])
    >>> y = torch.squeeze(x, 0)
    >>> y.size()
    torch.Size([2, 1, 2, 1, 2])
    >>> y = torch.squeeze(x, 1)
    >>> y.size()
    torch.Size([2, 2, 1, 2])
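
    A minimal sketch illustrating the two notes above: squeezing everything also drops a size-1 batch dimension, while specifying the dim keeps it, and the result shares storage with the input.

    >>> x = torch.zeros(1, 3, 1)    # batch dimension of size 1
    >>> torch.squeeze(x).size()     # squeezing everything drops the batch dim too
    torch.Size([3])
    >>> torch.squeeze(x, 2).size()  # squeezing only dim 2 keeps the batch dim
    torch.Size([1, 3])
    >>> y = torch.squeeze(x)
    >>> y[0] = 7.0                  # shared storage: modifying y also changes x
    >>> x
    tensor([[[7.],
             [0.],
             [0.]]])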
    

    torch.unsqueeze

    unsqueeze(...)
    unsqueeze(input, dim) -> Tensor

    Returns a new tensor with a dimension of size one inserted at the
    specified position.

    The returned tensor shares the same underlying data with this tensor.

    A :attr:dim value within the range [-input.dim() - 1, input.dim() + 1)
    can be used. Negative :attr:dim will correspond to :meth:unsqueeze
    applied at :attr:dim = dim + input.dim() + 1.

    • Args:
      input (Tensor): the input tensor. 输入tensor
      dim (int): the index at which to insert the singleton dimension

    • Example::

    >>> x = torch.tensor([1, 2, 3, 4])
    >>> torch.unsqueeze(x, 0)
    tensor([[ 1,  2,  3,  4]])
    >>> torch.unsqueeze(x, 1)
    tensor([[ 1],
            [ 2],
            [ 3],
            [ 4]])
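
    A quick sketch of the negative-dim rule mentioned above: for the 1-D input x, dim=-1 maps to dim = -1 + input.dim() + 1 = 1.

    >>> torch.unsqueeze(x, -1)      # same as torch.unsqueeze(x, 1)
    tensor([[1],
            [2],
            [3],
            [4]])
    >>> torch.unsqueeze(x, 0).shape
    torch.Size([1, 4])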
    

    torch.transpose

    transpose(...)
    transpose(input, dim0, dim1) -> Tensor

    Returns a tensor that is a transposed version of :attr:input.
    The given dimensions :attr:dim0 and :attr:dim1 are swapped.

    The resulting :attr:out tensor shares its underlying storage with the
    :attr:input tensor, so changing the content of one would change the content
    of the other.

    • Args:
      input (Tensor): the input tensor. 输入tensor
      dim0 (int): the first dimension to be transposed
      dim1 (int): the second dimension to be transposed

    • Example::

    >>> x = torch.randn(2, 3)
    >>> x
    tensor([[ 1.0028, -0.9893,  0.5809],
            [-0.1669,  0.7299,  0.4942]])
    >>> torch.transpose(x, 0, 1)
    tensor([[ 1.0028, -0.1669],
            [-0.9893,  0.7299],
            [ 0.5809,  0.4942]])
    

    Understanding: for a 2-D matrix, transpose is the ordinary matrix transpose. For higher-dimensional tensors, transpose can be understood as swapping the two given index positions when addressing the data.
    For intuition, see the second answer to the Zhihu question "numpy中多维数组的转置原理是什么?" (how does transposing a multi-dimensional numpy array work?) - 知乎 (zhihu.com)
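
    A minimal sketch of the "swap the indices" view on a 3-D tensor: transposing dims 0 and 2 swaps those positions in both the shape and the indexing.

    >>> x = torch.arange(24).view(2, 3, 4)
    >>> x.shape
    torch.Size([2, 3, 4])
    >>> torch.transpose(x, 0, 2).shape      # dims 0 and 2 are swapped
    torch.Size([4, 3, 2])
    >>> torch.transpose(x, 0, 2)[3, 2, 1]   # same element as x[1, 2, 3]
    tensor(23)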
