class paddle.optimizer.lr.LinearWarmup(learning_rate: float | paddle.optimizer.lr.LRScheduler, warmup_steps: int, start_lr: float, end_lr: float, last_epoch: int = -1, verbose: bool = False) [source]

Linear learning rate warm-up strategy. The learning rate is ramped up linearly before the normal learning rate scheduler takes over. For more information, please refer to Bag of Tricks for Image Classification with Convolutional Neural Networks.

When epoch < warmup_steps, learning rate is updated as:

\[lr = start\_lr + (end\_lr - start\_lr) * \frac{epoch}{warmup\_steps}\]

where start_lr is the initial learning rate, and end_lr is the final learning rate;

When epoch >= warmup_steps, learning rate is updated as:

\[lr = learning_rate\]

where learning_rate is a Python float or an instance of any subclass of LRScheduler .
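As a quick worked example of the two cases above, the same piecewise rule can be reproduced in plain Python (the values start_lr=0, end_lr=0.5, warmup_steps=20 are illustrative; this is a sketch of the formula, not part of the Paddle API):

>>> start_lr, end_lr, warmup_steps, base_lr = 0.0, 0.5, 20, 0.5
>>> def warmup_lr(epoch):
...     # Linear ramp while epoch < warmup_steps, then the wrapped learning rate.
...     if epoch < warmup_steps:
...         return start_lr + (end_lr - start_lr) * epoch / warmup_steps
...     return base_lr
>>> warmup_lr(0)    # start of warm-up
0.0
>>> warmup_lr(10)   # halfway: 0 + (0.5 - 0) * 10 / 20
0.25
>>> warmup_lr(25)   # warm-up finished, fall back to learning_rate
0.5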

Parameters
  • learning_rate (float|LRScheduler) – The learning rate after warm-up. It can be a Python float or an instance of any LRScheduler subclass.

  • warmup_steps (int) – Total number of warm-up steps. It must be a positive integer.

  • start_lr (float) – Initial learning rate of warm up.

  • end_lr (float) – Final learning rate of warm up.

  • last_epoch (int, optional) – The index of the last epoch. It can be set to resume training. Default: -1, which means the initial learning rate.

  • verbose (bool, optional) – If True, prints a message to stdout for each update. Default: False .

Returns

A LinearWarmup instance used to schedule the learning rate.

Examples

>>> # Example 1: train on the default dynamic graph mode
>>> import paddle
>>> linear = paddle.nn.Linear(10, 10)
>>> scheduler = paddle.optimizer.lr.LinearWarmup(
...         learning_rate=0.5, warmup_steps=20, start_lr=0, end_lr=0.5, verbose=True)
>>> sgd = paddle.optimizer.SGD(learning_rate=scheduler, parameters=linear.parameters())
>>> for epoch in range(20):
...     for batch_id in range(5):
...         x = paddle.uniform([10, 10])
...         out = linear(x)
...         loss = paddle.mean(out)
...         loss.backward()
...         sgd.step()
...         sgd.clear_grad()
...         scheduler.step()    # If you update learning rate each step
...     # scheduler.step()        # If you update learning rate each epoch
>>> # Example 2: train on static graph mode
>>> import paddle
>>> import numpy as np
>>> paddle.enable_static()
>>> main_prog = paddle.static.Program()
>>> start_prog = paddle.static.Program()
>>> with paddle.static.program_guard(main_prog, start_prog):
...     x = paddle.static.data(name='x', shape=[None, 4, 5])
...     y = paddle.static.data(name='y', shape=[None, 4, 5])
...     z = paddle.static.nn.fc(x, 100)
...     loss = paddle.mean(z)
...     scheduler = paddle.optimizer.lr.LinearWarmup(
...         learning_rate=0.5, warmup_steps=20, start_lr=0, end_lr=0.5, verbose=True)
...     sgd = paddle.optimizer.SGD(learning_rate=scheduler)
...     sgd.minimize(loss)
...
>>> exe = paddle.static.Executor()
>>> exe.run(start_prog)
>>> for epoch in range(20):
...     for batch_id in range(5):
...         out = exe.run(
...             main_prog,
...             feed={
...                 'x': np.random.randn(3, 4, 5).astype('float32'),
...                 'y': np.random.randn(3, 4, 5).astype('float32')
...             },
...             fetch_list=loss.name)
...         scheduler.step()    # If you update learning rate each step
...     # scheduler.step()        # If you update learning rate each epoch
state_dict() → _LRStateDict

Returns the state of the LinearWarmup scheduler as a dict.

It is a subset of self.__dict__ .

set_state_dict(state_dict: _LRStateDict) → None

Loads the given state_dict into the LinearWarmup scheduler.
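Together, state_dict and set_state_dict let you checkpoint the warm-up progress. A minimal sketch (the second scheduler stands in for one rebuilt after a restart; saving the dict to disk with paddle.save is left out for brevity):

>>> import paddle
>>> scheduler = paddle.optimizer.lr.LinearWarmup(
...     learning_rate=0.5, warmup_steps=20, start_lr=0, end_lr=0.5)
>>> scheduler.step()
>>> state = scheduler.state_dict()   # holds last_epoch and last_lr
>>> # Rebuild an identically configured scheduler and restore its progress.
>>> restored = paddle.optimizer.lr.LinearWarmup(
...     learning_rate=0.5, warmup_steps=20, start_lr=0, end_lr=0.5)
>>> restored.set_state_dict(state)
>>> restored.last_epoch == scheduler.last_epoch
True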

get_lr() → float

Subclasses that inherit from LRScheduler (the base class) should provide their own implementation of get_lr() .

Otherwise, a NotImplementedError exception will be raised.
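For example, a minimal sketch of a custom scheduler that implements get_lr() (the class name HalvingLR and its decay rule are purely illustrative, not part of the Paddle API):

>>> import paddle
>>> class HalvingLR(paddle.optimizer.lr.LRScheduler):
...     def __init__(self, learning_rate=0.1, halve_every=10, last_epoch=-1, verbose=False):
...         self.halve_every = halve_every  # the base __init__ calls get_lr(), so set this first
...         super().__init__(learning_rate, last_epoch, verbose)
...     def get_lr(self):
...         # Halve the base learning rate every `halve_every` epochs.
...         return self.base_lr * 0.5 ** (self.last_epoch // self.halve_every)
>>> scheduler = HalvingLR(learning_rate=0.1, halve_every=10)
>>> scheduler.get_lr()
0.1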

set_dict(state_dict: _LRStateDict) → None

Loads the scheduler's state.

state_keys() → None

For subclasses that overload LRScheduler (the base class). By default, last_epoch and last_lr are saved through self.keys = ['last_epoch', 'last_lr'] .

last_epoch is the current epoch num, and last_lr is the current learning rate.

If you want to change the default behavior, provide a custom implementation of _state_keys() to redefine self.keys .
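A minimal sketch of such an override (it persists warmup_steps, an attribute LinearWarmup already stores, in addition to the defaults; treat this as illustrative rather than a recommended checkpoint format):

>>> import paddle
>>> class MyWarmup(paddle.optimizer.lr.LinearWarmup):
...     def _state_keys(self):
...         # Keep the default keys and additionally persist warmup_steps.
...         self.keys = ['last_epoch', 'last_lr', 'warmup_steps']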

step(epoch: Optional[int] = None) → None

step should be called after optimizer.step . It updates the learning rate in the optimizer according to the current epoch . The new learning rate will take effect on the next call to optimizer.step .

Parameters

epoch (int, optional) – specify the current epoch. Default: None; the epoch is auto-incremented from last_epoch=-1 .

Returns

None

Examples

>>> import paddle
>>> value = paddle.arange(26, dtype='float32')
>>> a = paddle.reshape(value, [2, 13])
>>> linear = paddle.nn.Linear(13, 5)
>>> adadelta = paddle.optimizer.Adadelta(learning_rate=0.0003, epsilon=1e-06, rho=0.95,
...                             parameters = linear.parameters())
>>> out = linear(a)
>>> out.backward()
>>> adadelta.step()
>>> adadelta.clear_grad()
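The example above uses a plain float learning rate; when the optimizer is driven by a scheduler such as LinearWarmup, call scheduler.step() right after optimizer.step() as well (a minimal dynamic-graph sketch with illustrative values):

>>> import paddle
>>> linear = paddle.nn.Linear(13, 5)
>>> scheduler = paddle.optimizer.lr.LinearWarmup(
...     learning_rate=0.5, warmup_steps=20, start_lr=0, end_lr=0.5)
>>> sgd = paddle.optimizer.SGD(learning_rate=scheduler, parameters=linear.parameters())
>>> for epoch in range(3):
...     out = linear(paddle.uniform([2, 13]))
...     loss = paddle.mean(out)
...     loss.backward()
...     sgd.step()
...     sgd.clear_grad()
...     scheduler.step()    # or scheduler.step(epoch) to set the epoch explicitly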