
class paddle.distributed.Shard

The Shard placement describes how a Tensor is split across multiple devices along the specified dimension.

Parameters

dim (int) – the tensor dimension along which the tensor is sliced.

Examples

>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([[2, 4, 5], [0, 1, 3]], dim_names=['x', 'y'])
>>> a = paddle.to_tensor([[1, 2, 3], [5, 6, 7]])
>>> 
>>> # shard tensor dim 0 along mesh axis 'x' and dim 1 along mesh axis 'y'
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Shard(0), dist.Shard(1)])
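
The placements list pairs up with the mesh dimensions: entry i describes how the tensor is placed along mesh dimension i, so Shard(0) splits the tensor's rows across mesh axis 'x' and Shard(1) splits its columns across mesh axis 'y'. A short follow-up sketch, assuming the placements and process_mesh attributes that distributed tensors expose in recent Paddle releases:

>>> # entry 0 applies to mesh axis 'x', entry 1 to mesh axis 'y'
>>> print(d_tensor.placements)
>>> # the ProcessMesh the tensor was sharded over
>>> print(d_tensor.process_mesh)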
get_co_shard_order(self: paddle.base.libpaddle.Shard) -> int

get_dim(self: paddle.base.libpaddle.Shard) -> int

Returns the tensor dimension that this placement shards, i.e. the dim passed to the constructor.

get_split_factor(self: paddle.base.libpaddle.Shard) -> int

is_partial(self: paddle.base.libpaddle.Placement) -> bool

Returns True if this placement is a Partial placement.

is_replicated(self: paddle.base.libpaddle.Placement) -> bool

Returns True if this placement is a Replicate placement.

is_shard(self: paddle.base.libpaddle.Placement, dim: Optional[int] = None) -> bool

Returns True if this placement is a Shard placement; if dim is given, also requires that the placement shards that tensor dimension.

set_split_factor(self: paddle.base.libpaddle.Shard, arg0: int) -> None
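
A minimal sketch of the query methods above applied to a single placement object; the expected results follow from the class description and the signatures listed here (is_shard(dim) checking against the sharding dimension is an assumption based on its optional dim parameter):

>>> import paddle.distributed as dist
>>> p = dist.Shard(1)
>>> p.get_dim()        # 1: the tensor dimension this placement shards
>>> p.is_shard()       # True for any Shard placement
>>> p.is_shard(dim=1)  # True: dim matches the sharding dimension
>>> p.is_shard(dim=0)  # False: dim does not match
>>> p.is_replicated()  # False: Shard is not Replicate
>>> p.is_partial()     # False: Shard is not Partial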