`tf.train.exponential_decay` is the TensorFlow 1.x API; in TensorFlow 2.x, use the following instead:

```
tf.compat.v1.train.exponential_decay
```
Applies exponential decay to the learning rate.
```
tf.compat.v1.train.exponential_decay(
    learning_rate, global_step, decay_steps, decay_rate,
    staircase=False, name=None
)
```
ѵÁ·Ä£ÐÍʱ£¬Í¨³£½¨ÒéËæ×ÅѵÁ·µÄ½øÐнµµÍѧϰÂÊ¡£´Ëº¯Êý½«Ö¸ÊýË¥¼õº¯ÊýÓ¦ÓÃÓÚÌṩµÄ³õʼѧϰÂÊ¡£ËüÐèÒªÒ»¸öglobal_step
ÖµÀ´¼ÆËãË¥¼õµÄѧϰÂÊ¡£ÄúÖ»Ðè´«µÝÒ»¸öTensorFlow±äÁ¿£¬¼´¿ÉÔÚÿ¸öѵÁ·²½ÖèÖÐÔö¼Ó¸Ã±äÁ¿¡£
The function returns the decayed learning rate, computed as:

```
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
```
If the argument `staircase` is `True`, then `global_step / decay_steps` is an integer division, and the decayed learning rate follows a staircase function.
ʾÀý£ºÒÔ0.96Ϊµ×£¬Ã¿100000²½Ë¥¼õÒ»´Î£º
```python
...
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
learning_rate = tf.compat.v1.train.exponential_decay(
    starter_learning_rate, global_step, 100000, 0.96, staircase=True)
# Passing global_step to minimize() will increment it at each step.
learning_step = (
    tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
    .minimize(...my loss..., global_step=global_step)
)
```
Args

| Argument | Description |
|---|---|
| `learning_rate` | A scalar `float32` or `float64` Tensor, or a Python number. The initial learning rate. |
| `global_step` | A scalar `int32` or `int64` Tensor, or a Python number. The global step to use for the decay computation. Must not be negative. |
| `decay_steps` | A scalar `int32` or `int64` Tensor, or a Python number. Must be positive. See the decay computation above. |
| `decay_rate` | A scalar `float32` or `float64` Tensor, or a Python number. The decay rate. |
| `staircase` | Boolean. If `True`, decay the learning rate at discrete intervals. |
| `name` | String. Optional name of the operation. Defaults to `'ExponentialDecay'`. |

Returns

A scalar Tensor of the same type as `learning_rate`: the decayed learning rate.
Raises

`ValueError`: if `global_step` is not supplied.