Paper Summary 2: Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization

Author: 廿怎么念 | Published on 2021-02-10 20:21

0 Paper

Rakhlin A, Shamir O, Sridharan K. Making gradient descent optimal for strongly convex stochastic optimization. arXiv preprint arXiv:1109.5647. 2011 Sep 26.

1 Key contribution

The paper proves the following results on the convergence rate of stochastic gradient descent (SGD):

  1. For smooth and strongly convex problems, SGD attains an \mathcal{O}(1/T) convergence rate.
  2. For general (possibly non-smooth) strongly convex problems, SGD with averaging has an \Omega(\log(T)/T) lower bound.
  3. For non-smooth and strongly convex problems, SGD with \alpha-suffix averaging recovers the \mathcal{O}(1/T) rate, both in expectation and with high probability.

2 Preliminary knowledge

  1. Problem statement
    Given a convex domain W and an unknown convex function F, SGD updates w_t \in W so as to approach the optimal solution w^* \in W. The goal is to bound F(w_t) - F(w^*), either in expectation or with high probability.

  2. \lambda-strongly convex
    F is \lambda-strongly convex if for all w, w' \in W and any subgradient g of F at w,
    F(w') \geq F(w) + \left \langle g, \ w' - w \right \rangle + \frac{\lambda}{2} \| w'-w \|^2

  3. \mu-smooth w.r.t. w^*
    F(w) - F(w^*) \leq \frac{\mu}{2} \| w-w^* \|^2 \quad \text{for all } w \in W

  4. SGD
    At each step, SGD produces \hat{g}_t such that \mathbb{E}[\hat{g}_t] = g_t is a subgradient of F at w_t, and then updates
    w_{t+1} \leftarrow \Pi_W(w_t - \eta_t \hat{g}_t)
    where \Pi_W is the Euclidean projection onto W and \eta_t is the step size (see the sketch after this list).

  5. \alpha-suffix averaging
    \hat{w}_{\alpha}^{T} = \frac{w_{(1-\alpha) T + 1} + ... + w_T}{\alpha T}
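
To make items 4 and 5 concrete, here is a minimal NumPy sketch of projected SGD with the step size \eta_t = c/(\lambda t) used throughout the paper, together with \alpha-suffix averaging. The toy objective, the Gaussian noise model, and the \ell_2-ball domain are illustrative assumptions of mine, not taken from the paper.

```python
import numpy as np

def project_l2_ball(w, radius=1.0):
    # Euclidean projection Pi_W onto W = {w : ||w|| <= radius}
    norm = np.linalg.norm(w)
    return w if norm <= radius else (radius / norm) * w

def sgd_with_suffix_average(stoch_subgrad, dim, T, lam, c=1.0, alpha=0.5, radius=1.0):
    """Projected SGD with eta_t = c/(lam*t); returns the last iterate and the alpha-suffix average."""
    w = np.zeros(dim)
    suffix_start = int((1 - alpha) * T) + 1      # first step whose iterate enters the suffix average
    suffix_sum = np.zeros(dim)
    for t in range(1, T + 1):
        g_hat = stoch_subgrad(w)                 # noisy subgradient with E[g_hat] in the subdifferential at w
        eta_t = c / (lam * t)
        w = project_l2_ball(w - eta_t * g_hat, radius)
        if t >= suffix_start:
            suffix_sum += w
    return w, suffix_sum / (T - suffix_start + 1)

# Toy lam-strongly convex objective F(w) = (lam/2)||w||^2 + <b, w> with Gaussian gradient noise
rng = np.random.default_rng(0)
lam, dim = 1.0, 5
b = 0.1 * np.ones(dim)
stoch_subgrad = lambda w: lam * w + b + 0.1 * rng.standard_normal(dim)
w_last, w_suffix = sgd_with_suffix_average(stoch_subgrad, dim, T=10_000, lam=lam)
```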

3 Main analysis

1) Smooth functions

\textbf{Theorem 1} Suppose F is \lambda-strongly convex and \mu-smooth w.r.t. w^* over a convex set W, and that \mathbb{E}[\| \hat{g}_t \|^2] \leq G^2. Then, if we pick \eta_t = c / (\lambda t) for some constant c > 1/2, it holds for any T that
\mathbb{E}[F(w_T) - F(w^*)] \leq \frac{1}{2} \max \left \{ 4, \frac{c}{2-1/c} \right \} \frac{\mu G^2}{\lambda^2 T}

\textbf{Lemma 1} Suppose F is \lambda-strongly convex and \mu-smooth w.r.t. w^* over a convex set W, and that \mathbb{E}[\| \hat{g}_t \|^2] \leq G^2. Then, if we pick \eta_t = c / (\lambda t) for some constant c > 1/2, it holds for any T that
\mathbb{E}[ \| w_T - w^* \|^2] \leq \max \left \{ 4, \frac{c}{2-1/c} \right \} \frac{ G^2}{\lambda^2 T}
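
As a sanity check on the constants (and assuming Lemma 1 is stated without an extra 1/2 factor, as written above), Theorem 1 follows from Lemma 1 in one line via the \mu-smoothness of F w.r.t. w^*:
\mathbb{E}[F(w_T) - F(w^*)] \;\leq\; \frac{\mu}{2}\,\mathbb{E}\big[\|w_T - w^*\|^2\big] \;\leq\; \frac{1}{2} \max \left \{ 4, \frac{c}{2-1/c} \right \} \frac{\mu G^2}{\lambda^2 T}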

\textbf{Theorem 2} Suppose F is \lambda-strongly convex and \mu-smooth w.r.t. w^* over a convex set W, \hat{w}_T is the average of \{w_t\}_{t=1}^{T}, and \mathbb{E}[\| \hat{g}_t \|^2] \leq G^2. Then, if we pick \eta_t = c / (\lambda t) for some constant c > 1/2, it holds for any T that
\mathbb{E}[F(\hat{w}_T)- F(w^*)] \leq \frac{2}{T} \max \left \{ \frac{\mu G^2}{\lambda^2}, \frac{ 4 \mu G}{\lambda}, \frac{ \mu G}{\lambda} \sqrt{\frac{ 4c}{2-1/c}} \right \}

2) Non-smooth functions

\textbf{Theorem 3} shows that when the global optimum lies at a corner of W, so that SGD approaches the optimum from one direction only, the convergence rate of SGD with averaging is \Omega(\log(T) / T).

\textbf{Theorem 4} shows that even when the global optimum lies in the interior of W, as long as SGD approaches the optimum from one direction only, the convergence rate of SGD with averaging remains \Omega(\log(T) / T).

3) SGD with \alpha-suffix averaging

\textbf{Theorem 5} Consider SGD with \alpha-suffix averaging and step size \eta_t = c / (\lambda t), where c > 1/2 is a constant. Suppose F is \lambda-strongly convex and that \mathbb{E}[\| \hat{g}_t \|^2] \leq G^2 for all t. Then for any T, it holds that
\mathbb{E}[F(\hat{w}_{\alpha}^{T})- F(w^*)] \leq \frac{\left ( c' + (\frac{c}{2} + c') \log (\frac{1}{1-\alpha}) \right )}{\alpha} \frac{G^2}{\lambda T}
where c' = \max \left \{ \frac{2}{c}, \frac{1}{4-2/c} \right \}
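
To see the gap described by Theorems 3-5 empirically, the following sketch runs SGD on a 1-D toy problem whose constrained optimum sits at a corner of W = [0, 1], so the iterates approach w^* from one side only. The toy objective F(w) = (\lambda/2) w^2 + w is my own choice for illustration, not the paper's lower-bound construction.

```python
import numpy as np

# 1-D toy: F(w) = (lam/2) w^2 + w over W = [0, 1] is lam-strongly convex,
# its constrained minimizer is w* = 0 (a corner of W), and the gradient is
# strictly positive on W, so SGD approaches w* from one side only.
lam, T, c, alpha = 1.0, 100_000, 1.0, 0.5
rng = np.random.default_rng(0)

w = 1.0                                    # start at the opposite corner of W
full_sum, suffix_sum = 0.0, 0.0
suffix_start = int((1 - alpha) * T) + 1
for t in range(1, T + 1):
    g_hat = lam * w + 1.0 + 0.5 * rng.standard_normal()    # noisy gradient
    w = min(max(w - (c / (lam * t)) * g_hat, 0.0), 1.0)    # projection onto [0, 1]
    full_sum += w
    if t >= suffix_start:
        suffix_sum += w

F = lambda x: 0.5 * lam * x**2 + x         # F(w*) = F(0) = 0
print("last iterate     :", F(w))
print("full average     :", F(full_sum / T))                          # roughly log(T)/T here
print("alpha-suffix avg.:", F(suffix_sum / (T - suffix_start + 1)))   # roughly 1/T
```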

4) High probability bounds

\textbf{Lemma 2} Let \delta in (0, 1/e) and T \leq 4. Suppose F is \lambda-strongly \ convex over a convex set W, and that \mathbb{E}[\| \hat{g_t} \|^2] \leq G^2 with probalility 1. Then if we pick \eta_t = c/ (\lambda t) for some constant c > 1/2, such that 2c is a whole number, it holds with probability at least 1-\delta that for any t \in {4c^2 + 4c, ..., T-1, T} that
\| w_t - w^* \|^2 \leq \frac{12c^2 G^2}{\lambda^2 t} + 8(121G + 1) G \frac{c \log (\log (t) / \delta)}{\lambda t}

4 Some thoughts about innovation and writing

1) Innovation:

  1. Extend something already known to areas where it has not yet been established
  2. Discuss special but important cases
  3. Establish theoretical analysis for an observed phenomenon
  4. Apply theoretical results to real applications

2) Writing:

  1. The title is better if it is fewer than 10 words. Make it concise and interesting!
  2. In the introduction, start with a general topic and narrow it down to the key topic of the paper step by step. Transitions can be made by saying, e.g., "An important special case…" or "One of the … is that …". Remember, tell a good story!
  3. For the literature review, including only the latest papers may be enough. Clarify how your paper differs from related work.
  4. Claim and list the specific contributions of the paper. Even if you have already said something about your innovations, list them in detail anyway so that readers know your exact contributions.
