
paddle.static.gradients(targets: Tensor | Sequence[Tensor], inputs: Tensor | Sequence[Tensor], target_gradients: Tensor | Sequence[Tensor] | None = None, no_grad_set: set[Tensor | str] | None = None) → list[Tensor]

Backpropagate the gradients of targets to inputs.

Parameters
  • targets (Tensor|list[Tensor]|tuple[Tensor]) – The target Tensors.

  • inputs (Tensor|list[Tensor]|tuple[Tensor]) – The input Tensors.

  • target_gradients (Tensor|list[Tensor]|tuple[Tensor]|None, optional) – The gradient Tensors of targets, which have the same shape as targets. If None, Tensors of ones will be created for them. Default: None. A usage sketch follows this parameter list.

  • no_grad_set (set[Tensor|str]|None, optional) – Set of Tensors or Tensor.names in Block 0 whose gradients should be ignored. All Tensors with stop_gradient=True from all blocks will be automatically added to this set. If this parameter is not None, the Tensors or Tensor.names in this set will be added to the default set. Default: None.
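
A minimal sketch of target_gradients (the names u, v and seed, and the constant 2.0, are illustrative only, not part of the API): supplying a Tensor of twos in place of the default ones scales the gradient backpropagated from the target accordingly.

>>> import paddle
>>> paddle.enable_static()

>>> u = paddle.static.data(name='u', shape=[4, 3], dtype='float32')
>>> u.stop_gradient = False
>>> v = u * 3
>>> # seed plays the role of the incoming gradient of v; twos instead of the default ones.
>>> seed = paddle.full(shape=[4, 3], fill_value=2.0, dtype='float32')
>>> g = paddle.static.gradients([v], u, target_gradients=[seed])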

Returns

A list of gradients for inputs. If an input does not affect targets, the corresponding gradient Tensor will be None.

Return type

list[Tensor]
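
As noted above, an input that does not contribute to any target gets None in the returned list. A minimal sketch (the names a, b and t are illustrative only):

>>> import paddle
>>> paddle.enable_static()

>>> a = paddle.static.data(name='a', shape=[2, 3], dtype='float32')
>>> b = paddle.static.data(name='b', shape=[2, 3], dtype='float32')
>>> a.stop_gradient = False
>>> b.stop_gradient = False
>>> t = a * 2  # t depends on a only
>>> grads = paddle.static.gradients([t], [a, b])
>>> # grads[0] is the gradient Tensor of t with respect to a;
>>> # grads[1] is None because b does not affect t.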

Examples

>>> import paddle
>>> import paddle.nn.functional as F

>>> paddle.enable_static()

>>> x = paddle.static.data(name='x', shape=[None, 2, 8, 8], dtype='float32')
>>> x.stop_gradient = False
>>> y = paddle.static.nn.conv2d(x, 4, 1, bias_attr=False)
>>> y = F.relu(y)
>>> z = paddle.static.gradients([y], x)
>>> print(z)
[var x@GRAD : DENSE_TENSOR.shape(-1, 2, 8, 8).dtype(float32).stop_gradient(False)]
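
The returned gradient Tensors are ordinary graph outputs, so they can be evaluated with an Executor. A self-contained sketch (the names w, loss, main and startup, and the NumPy feed value, are illustrative only), assuming a CPU place:

>>> import numpy as np
>>> import paddle
>>> paddle.enable_static()

>>> main = paddle.static.Program()
>>> startup = paddle.static.Program()
>>> with paddle.static.program_guard(main, startup):
...     w = paddle.static.data(name='w', shape=[2, 2], dtype='float32')
...     w.stop_gradient = False
...     loss = paddle.sum(w * w)
...     dw = paddle.static.gradients([loss], w)

>>> exe = paddle.static.Executor(paddle.CPUPlace())
>>> exe.run(startup)
>>> out = exe.run(main, feed={'w': np.ones([2, 2], dtype='float32')}, fetch_list=dw)[0]
>>> # out is a 2x2 array of twos, since d(sum(w * w))/dw = 2 * w.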