
class paddle.nn.AlphaDropout(p: float = 0.5, name: Optional[str] = None) [source]

Alpha Dropout is a type of Dropout that maintains the self-normalizing property. For an input with zero mean and unit standard deviation, the output of Alpha Dropout preserves that mean and standard deviation. Alpha Dropout fits well with the SELU activation function because it randomly sets activations to SELU's negative saturation value rather than to zero.

For more information, please refer to: Self-Normalizing Neural Networks
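To make the self-normalizing property concrete, here is a minimal NumPy sketch of the Alpha Dropout transform from the paper: dropped units are set to the SELU negative saturation value, and an affine correction restores zero mean and unit variance. This is a reference illustration only, not Paddle's implementation; the function name `alpha_dropout` is chosen for this sketch.

```python
import numpy as np

# Fixed constants from the SELU activation (Klambauer et al., 2017).
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805
# SELU's negative saturation value: dropped units are set to this.
ALPHA_P = -SELU_SCALE * SELU_ALPHA


def alpha_dropout(x, p=0.5, rng=None):
    """Reference Alpha Dropout (training mode).

    Units are dropped to ALPHA_P with probability p, then an affine
    transform a * x + b restores zero mean and unit variance.
    """
    rng = np.random.default_rng() if rng is None else rng
    q = 1.0 - p  # keep probability
    mask = rng.random(x.shape) < q
    # Affine coefficients derived in the paper so that the output keeps
    # mean 0 and variance 1 for a standardized input.
    a = (q + ALPHA_P ** 2 * q * p) ** -0.5
    b = -a * ALPHA_P * p
    return a * np.where(mask, x, ALPHA_P) + b


rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = alpha_dropout(x, p=0.5, rng=rng)
# Unlike standard dropout, the output mean stays near 0 and the
# standard deviation near 1.
```

Running this on a large standard-normal sample shows the output statistics staying close to (0, 1), which is exactly the property the layer is designed to preserve.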

In dygraph mode, please use eval() to switch to evaluation mode, where dropout is disabled.

Parameters
  • p (float|int, optional) – Probability of setting units to zero. Default: 0.5

  • name (str|None, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Shape:
  • input: N-D tensor.

  • output: N-D tensor, the same shape as input.

Examples

>>> import paddle
>>> paddle.seed(2023)

>>> x = paddle.to_tensor([[-1, 1], [-1, 1]], dtype="float32")
>>> m = paddle.nn.AlphaDropout(p=0.5)
>>> y_train = m(x)
>>> print(y_train)
Tensor(shape=[2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
[[-0.10721093,  1.66559887],
 [-0.77919382,  1.66559887]])

>>> m.eval()  # switch the model to test phase
>>> y_test = m(x)
>>> print(y_test)
Tensor(shape=[2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
[[-1.,  1.],
 [-1.,  1.]])
forward(input: Tensor) → Tensor

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments

extra_repr() → str

Extra representation of this layer. You can override it to provide a custom representation for your own layer.
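The pattern behind extra_repr can be sketched with a plain Python class: the base class splices the string returned by extra_repr into the layer's repr, so a subclass only needs to override extra_repr to surface its hyperparameters. This is a simplified illustration of the pattern, not Paddle's actual Layer base class; the names `SimpleLayer` and `MyAlphaDropout` are invented for this sketch.

```python
class SimpleLayer:
    """Minimal stand-in for a layer base class (illustration only)."""

    def extra_repr(self):
        # Default: nothing extra to show.
        return ""

    def __repr__(self):
        # The base class composes extra_repr() into the printed form.
        return f"{type(self).__name__}({self.extra_repr()})"


class MyAlphaDropout(SimpleLayer):
    def __init__(self, p=0.5):
        self.p = p

    def extra_repr(self):
        # Custom implementation: surface this layer's hyperparameter.
        return f"p={self.p}"


print(MyAlphaDropout(p=0.3))  # prints MyAlphaDropout(p=0.3)
```

Overriding only extra_repr, rather than __repr__ itself, keeps the layer's printed form consistent with the rest of the framework.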