µ±Ç°Î»Ö㺠´úÂëÃÔ >> ×ÛºÏ >> tensflowѧϰС֪ʶtf.train.exponential_decay
  Ïêϸ½â¾ö·½°¸

A small TensorFlow learning note: tf.train.exponential_decay

Èȶȣº54   ·¢²¼Ê±¼ä£º2024-01-19 11:17:42.0

tf.train.exponential_decay belongs to the TensorFlow 1.x API. In TensorFlow 2.x, use the following instead:

tf.compat.v1.train.exponential_decay

It applies exponential decay to the learning rate.

tf.compat.v1.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)

When training a model, it is often helpful to lower the learning rate as training progresses. This function applies an exponential decay function to a provided initial learning rate. It requires a global_step value to compute the decayed learning rate; you simply pass a TensorFlow variable that you increment at each training step.

The function returns the decayed learning rate, computed as:

decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)

If the argument staircase is True, global_step / decay_steps is an integer division, and the decayed learning rate follows a staircase function.
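
To make the formula concrete, here is a minimal pure-Python sketch (no TensorFlow needed; the numbers are illustrative, not from this page) that evaluates the decayed rate with and without staircase:

def decayed_lr(learning_rate, global_step, decay_steps, decay_rate, staircase=False):
    # staircase=True uses integer division, so the rate only drops
    # once per full decay_steps interval (a step function).
    exponent = global_step // decay_steps if staircase else global_step / decay_steps
    return learning_rate * decay_rate ** exponent

print(decayed_lr(0.1, 50000, 100000, 0.96))                  # ~0.098 (smooth decay)
print(decayed_lr(0.1, 50000, 100000, 0.96, staircase=True))  # 0.1 (first drop at step 100000)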

Example: decay every 100000 steps with a base of 0.96:

...
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
learning_rate = tf.compat.v1.train.exponential_decay(
    starter_learning_rate, global_step, 100000, 0.96, staircase=True)
# Passing global_step to minimize() will increment it at each step.
learning_step = (
    tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
    .minimize(...my loss..., global_step=global_step)
)
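
In TensorFlow 2.x you would normally skip the compat layer and use the native schedule tf.keras.optimizers.schedules.ExponentialDecay, which implements the same formula; the optimizer tracks the step counter itself, so no global_step variable is needed. A sketch of the equivalent setup:

import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1,
    decay_steps=100000,
    decay_rate=0.96,
    staircase=True)
# Pass the schedule object where a learning rate is expected.
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)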

 

Args

learning_rate: A scalar float32 or float64 Tensor or a Python number. The initial learning rate.
global_step: A scalar int32 or int64 Tensor or a Python number. Global step to use for the decay computation. Must not be negative.
decay_steps: A scalar int32 or int64 Tensor or a Python number. Must be positive. See the decay computation above.
decay_rate: A scalar float32 or float64 Tensor or a Python number. The decay rate.
staircase: Boolean. If True, decay the learning rate at discrete intervals.
name: String. Optional name of the operation. Defaults to 'ExponentialDecay'.

 

Returns

A scalar Tensor of the same type as learning_rate. The decayed learning rate.

Raises

ValueError: if global_step is not supplied.
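
A quick way to see this documented Raises behavior (a sketch; assumes a TensorFlow install with the compat.v1 API, and the exact error message may vary by version):

import tensorflow as tf

try:
    tf.compat.v1.train.exponential_decay(
        learning_rate=0.1, global_step=None,
        decay_steps=100000, decay_rate=0.96)
except ValueError as err:
    print(err)  # ValueError per the Raises section above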