Convex vs. Non-convex Optimization

Convex optimization: in a certain sense, convex problems are simpler than general mathematical optimization problems; in particular, any local optimum of a convex problem is necessarily a global optimum, which makes the problem easier.

Non-convex optimization: local minima need not be global, so this guarantee is lost.

Smooth vs. Non-smooth

Smooth: differentiable everywhere.

Non-smooth: not differentiable at some points (e.g. $|x|$ at $x=0$).

Chapter 2: Unconstrained Optimization (in the Line-Search Framework)

1. Gradient Method (Steepest Descent)

Gradient: the direction of fastest increase; the higher-dimensional generalization of the derivative.

$$
g(x)=\nabla f(x)=\left[\frac{\partial f(x)}{\partial x_1},\ldots,\frac{\partial f(x)}{\partial x_n}\right]^T
$$

First, the questions that need answering:

  1. The search direction is the negative gradient; "steepest" does not mean fastest convergence:

$$
d_k=-\frac{\nabla f(x_k)}{\|\nabla f(x_k)\|}
$$

  2. A one-dimensional search finds $\lambda_k$ such that

$$
\begin{aligned}
\varphi(\lambda)&=f(x_k+\lambda d_k),\\
f(x_k+\lambda_k d_k)&=\min_{\lambda\geq 0} f(x_k+\lambda d_k).
\end{aligned}
$$

  3. Steepest descent is best suited to the early iterations of an optimization run, or as an interleaved step; once near the extremum, an algorithm with faster convergence should be used (a sketch follows below).
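To make the loop concrete, here is a minimal sketch of steepest descent in Python, assuming a simple backtracking search in place of the exact one-dimensional search; the test function and all parameter values are illustrative choices, not prescribed by these notes:

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=1000):
    """Minimize f by steepest descent with a backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:        # stop once the gradient vanishes
            break
        d = -g / np.linalg.norm(g)         # normalized negative gradient d_k
        lam = 1.0
        # crude backtracking stand-in for the exact search over lambda >= 0
        while f(x + lam * d) >= f(x) and lam > 1e-12:
            lam *= 0.5
        x = x + lam * d
    return x

# Illustrative ill-conditioned quadratic: steepest descent zigzags on it
f = lambda x: 0.5 * (x[0] ** 2 + 10 * x[1] ** 2)
grad = lambda x: np.array([x[0], 10 * x[1]])
print(steepest_descent(f, grad, np.array([10.0, 1.0])))  # -> near (0, 0)
```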

2. Conjugate Gradient Method


As the example above shows, one problem with plain gradient descent is that it zigzags back and forth across a valley, following the local gradient at every step. Conjugate gradient fixes this by adding a friction-like term: each step also depends on the previous gradient values, so sharp turns are damped.

First, understand eigenvalues:

  • Linear transformation $\mathcal L$
  • $n \times 1$ vector $u$ $\Longrightarrow$ eigenvector
  • Scale factor $\lambda$ $\Longrightarrow$ eigenvalue

$$
\mathcal L u=\lambda u,\quad u\neq 0
$$

In other words, after the linear transformation, anything along the direction of $u$ is merely scaled; the direction itself is unchanged, as if it were an anchor that pins the whole transformation down.

$$
M=U \Sigma U^T
$$

  • Step 1: rotate by $U^T$
  • Step 2: $\Sigma$ is diagonal, i.e. a scaling
  • Step 3: rotate by $U$, rotating back

What is conjugacy?

  • $A$: an $n \times n$ symmetric positive definite matrix
  • $\{p_0,p_1,\ldots,p_k\}$: a set of nonzero vectors

If they satisfy

$$
p_i^T A p_j=0,\quad \forall i\neq j
$$

then the set is $A$-orthogonal (conjugate), and hence linearly independent (in $n$ dimensions there are at most $n$ such vectors). This also shows that conjugacy is an extension of orthogonality: when $A$ is the identity matrix $I$, conjugacy reduces to orthogonality, much as the gradient is an extension of the derivative.

  • Conjugate-direction methods are all algorithms that use conjugate vectors as the update directions.
  • The conjugate gradient algorithm builds this set of conjugate vectors from the gradients during the iteration itself.

Starting from an arbitrary point $x_0$, do a line search along an arbitrary descent direction $p_i$ to obtain $x_1$, then search along a direction $p_j$ conjugate to $p_i$, and repeat (see the sketch below).
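A minimal sketch of the linear conjugate gradient iteration on the quadratic $f(x)=\frac 12 x^\top A x + b^\top x$; the Fletcher–Reeves update used here is one standard choice, and the matrix $A$ and vector $b$ are illustrative values:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Minimize f(x) = 1/2 x^T A x + b^T x for symmetric positive definite A."""
    x = np.asarray(x0, dtype=float)
    g = A @ x + b                    # gradient of f at x
    p = -g                           # first direction: steepest descent
    for _ in range(len(b)):          # at most n steps on a quadratic
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (p @ A @ p)        # exact step length along p
        x = x + alpha * p
        g_new = A @ x + b
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        p = -g_new + beta * p                # next direction, A-conjugate to p
        g = g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])       # illustrative SPD matrix
b = np.array([-1.0, -2.0])
x = conjugate_gradient(A, b, np.zeros(2))
print(x, np.allclose(A @ x + b, 0))          # gradient is ~0 at the minimizer
```

On a quadratic the gradient vanishes after at most $n$ steps, which is the quadratic-termination property noted later in these notes.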

3. Newton Method

Newton methods use a local quadratic approximation to compute the jump direction. For this purpose they rely on the first two derivatives of the function: the gradient and the Hessian.

$$
\Delta \mathbf w =-\epsilon H(\mathbf w)^{-1}\frac {\mathrm d E}{\mathrm d \mathbf w}
$$

  • On a real quadratic surface it jumps to the minimum in one step.
  • Unfortunately, with only a million weights, the curvature matrix has a trillion terms and it is totally infeasible to invert it.

Curvature Matrices: H(w)

  • Each element in the curvature matrix specifies how the gradient in one direction changes as we move in some other direction.
    • The off-diagonal terms correspond to twists in the error surface.
  • The reason steepest descent goes wrong is that the gradient for one weight gets messed up by the simultaneous changes to all the other weights.
    • The curvature matrix determines the sizes of these interactions.

The genuinely quadratic surface is the quadratic approximation to the true surface.

$$
f(x)=\frac 12 x^T Ax+b^T x
$$

$g(x)$ is the derivative of the quadratic $f(x)$; finding the minimizer of $f(x)$ amounts to finding a zero of $g(x)$, which gives:

One dimension

Newton's root-finding iteration for $g$ (the secant method instead replaces $g'(x_k)$ with the difference quotient $\frac{g(x_k)-g(x_{k-1})}{x_k-x_{k-1}}$):

$$
x_{k+1}=x_k-\frac {g(x_k)}{g'(x_k)}
$$

Generalizing to higher dimensions

Hessian matrix $\mathbf H=\nabla^2 f(\mathbf x)$:

$$
\mathbf x_{k+1}=\mathbf x_k-\left[\nabla^2 f(\mathbf x_k)\right]^{-1}\nabla f(\mathbf x_k)
$$

Lipschitz continuity: the quadratic-convergence analysis of Newton's method assumes the Hessian is Lipschitz continuous near the solution.

Advantages

  • Second-order (quadratic) convergence
  • But it may also fail to converge, e.g. by cycling:

$$
\begin{aligned}
x_1&=1, & f'(x_1)&=2, & f''(x_1)&=1\\
x_2&=-1, & f'(x_2)&=-2, & f''(x_2)&=1\\
x_3&=1,\ \ldots
\end{aligned}
$$

Disadvantages

  • Requires the Hessian matrix to be invertible
  • Requires second derivatives and a matrix inversion, so the computation and storage costs are high (a sketch follows below)
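A minimal sketch of the pure Newton iteration; it solves $H s=-g$ rather than forming the inverse explicitly, which is cheaper and more stable. The test function is the same one reused in the trust-region example of Chapter 3:

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-8, max_iter=50):
    """Pure Newton iteration: x <- x - H(x)^{-1} grad(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # solve H s = -g instead of forming the inverse explicitly
        s = np.linalg.solve(hess(x), -g)
        x = x + s
    return x

# Illustrative function: f(x) = x1^4 + x1^2 + x2^2, minimized at the origin
grad = lambda x: np.array([4 * x[0] ** 3 + 2 * x[0], 2 * x[1]])
hess = lambda x: np.array([[12 * x[0] ** 2 + 2, 0.0], [0.0, 2.0]])
print(newton(grad, hess, np.array([1.0, 1.0])))  # -> near (0, 0)
```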

4. Quasi-Newton Method

In the Hessian-free method, we make an approximation to the curvature matrix and then, assuming that approximation is correct, we minimize the error using an efficient technique called conjugate gradient. Then we make another approximation to the curvature matrix and minimize again.

  • For RNNs it's important to add a penalty for changing any of the hidden activities too much.

Approximate the inverse of the Hessian:

$$
d_k^{QN}=-H_kg_k,\qquad \{H_0,\ldots,H_n\}
$$

The memoryless quasi-Newton method is the one most closely related to the conjugate gradient method.

BFGS

(the Broyden–Fletcher–Goldfarb–Shanno algorithm) refines the approximation of the Hessian at each step.

L-BFGS:
Limited-memory BFGS sits between BFGS and conjugate gradient: in very high dimensions (> 250), computing and inverting the Hessian matrix is very costly. L-BFGS keeps a low-rank version instead. In addition, the scipy version, scipy.optimize.fmin_l_bfgs_b(), includes box bounds:
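A minimal usage sketch of that scipy routine; the quadratic objective and the box bounds below are illustrative:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Illustrative quadratic objective f(x) = 1/2 x^T A x + b^T x with box bounds
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([-1.0, -2.0])

def f(x):
    return 0.5 * x @ A @ x + b @ x

def grad(x):
    return A @ x + b

x0 = np.zeros(2)
# bounds constrain each coordinate to the box [-1, 1]
x_opt, f_opt, info = fmin_l_bfgs_b(f, x0, fprime=grad,
                                   bounds=[(-1.0, 1.0), (-1.0, 1.0)])
print(x_opt, f_opt)
```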

Conjugate Gradient (CG)

  • Use a sequence of steps each of which finds the minimum along one direction.
  • Make sure that each direction is "conjugate" to the previous directions so you do not mess up the minimization you already did.

    • conjugate: when you go in the new direction, you do not change the gradients in the previous direction.
  • After N steps, conjugate gradient is guaranteed to find the minimum of an N-dimensional quadratic surface.

  • Conjugate gradient can be applied to a non-quadratic error surface and it usually works quite well.

  • The HF optimizer uses CG for minimization on a genuinely quadratic surface where it excels.

5. Line Search

  • Search direction $d_k$
  • Step-size factor $\alpha_k$

$$
\begin{aligned}
x_{k+1}&=x_k+\alpha_k d_k\\
\phi(\alpha)&=f(x_k+\alpha d_k)
\end{aligned}
$$

The goal then reduces to achieving $\phi(\alpha_k) \lt \phi(0)$.

Exact line search

$$
\alpha_k=\min\{\alpha>0 \mid \nabla f(x_k+\alpha d_k)^Td_k=0\}
$$

The aim is to minimize the objective function exactly along the direction $d_k$; the $\alpha_k$ obtained this way is the exact step-size factor.

  • $A$: a symmetric positive semidefinite matrix

$$
f(x)=\frac 12 x^T Ax+b^Tx
$$

$$
\begin{aligned}
\phi(\alpha)&=\frac 12 (x_k+\alpha d_k)^T A (x_k+\alpha d_k)+b^T(x_k+\alpha d_k)\\
&= \frac 12 d_k^T A d_k \,\alpha^2+(Ax_k+b)^T d_k\, \alpha + f(x_k)\\
\phi'(\alpha)&=d_k^T A d_k \,\alpha+(Ax_k+b)^Td_k=0\\
\nabla f(x)&=Ax+b
\end{aligned}
$$

The exact step size can then be written out:

$$
\alpha_k=-\frac{\nabla f(x_k)^Td_k}{d_k^TAd_k}
$$

Orthogonality condition: by the optimality of the exact search, $\nabla f(x_{k+1})^\top d_k=0$, i.e. the new gradient is orthogonal to the previous search direction (verified numerically below).
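A quick numerical check of the exact step-size formula and the orthogonality condition; $A$, $b$, $x_k$, and $d_k$ below are illustrative values:

```python
import numpy as np

# Check on the quadratic f(x) = 1/2 x^T A x + b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -2.0])
xk = np.array([2.0, 1.0])
g = A @ xk + b                     # gradient at x_k
dk = -g                            # a descent direction

alpha = -(g @ dk) / (dk @ A @ dk)  # exact step from the formula above
x_next = xk + alpha * dk

# the new gradient is orthogonal to the previous search direction
print(np.isclose((A @ x_next + b) @ dk, 0.0))  # True
```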

Inexact line search

The advance-and-retreat method: determining an initial search interval.

A "high–low–high" pattern of function values brackets a minimum.

Quadratic termination property

When the objective function is quadratic, conjugate-direction methods reach the minimizer in at most $N$ iterations ($N$ = the dimension of the vectors).

The BB (Barzilai–Borwein) method

Chapter 3: Constrained Optimization

$$
L(x,\mathbf \lambda)=f(x)-\sum_{i=1}^m\lambda_i c_i(x)
$$

  • $\mathbf c(x)=(c_1(x),\ldots,c_m(x))^\top$
  • $\mathbf \lambda=(\lambda_1,\ldots,\lambda_m)^\top$
  • $\nabla_xL(x,\mathbf \lambda)=\nabla f(x)-\mathbf \lambda^\top \nabla \mathbf c(x)=0$
  • $\nabla_\lambda L(x,\mathbf \lambda)=-\mathbf c(x)=0$

$-1 \lt x_1 \lt 1$, $-1 \lt x_2 \lt 1$

1. Trust Region

  • At the current point, approximate the objective with a quadratic model: $q_k(\cdot)\approx f(x)$
  • Minimize $q_k$ and use its minimizer as the next trial point
  • First fix the region radius $\Delta_k$; different norms can be used
    • Infinity norm: $\max_i |x_i|$
  • Cauchy step: $s_k^c=-\tau_k \frac{\Delta_k}{\|g_k\|}g_k$
  • $\tau_k=\begin{cases} 1, &g_k^\top B_kg_k\leq 0\\ \min\left\{\frac{\|g_k\|^3}{\Delta_k\, g_k^\top B_kg_k},\,1\right\}, &\text{otherwise} \end{cases}$

Dynamically adjust the radius.

The dogleg method

Example: let $\Delta_k=\frac 12$; at step $k$ the subproblem is solved with the double-dogleg method.

$$
f(x)=x_1^4+x_1^2+x_2^2
$$

Since $\mathbf x_k=(1,1)^\top$,

$$
g_k=\nabla f(\mathbf x_k)=\begin{bmatrix} 4x_1^3+2x_1\\ 2x_2 \end{bmatrix}
$$

which gives $g_k=(6,2)^\top$. Similarly:

$$
\begin{aligned}
G_k&=\nabla^2 f(\mathbf x_k)=\begin{bmatrix} 12x_1^2+2&0\\ 0&2 \end{bmatrix}=\begin{bmatrix} 14&0\\ 0&2 \end{bmatrix}\\
s_k^N&=-G_k^{-1}g_k=\begin{bmatrix} -\frac 37\\ -1 \end{bmatrix}\\
s_k^c&=-\frac {\|g_k\|_2^2}{g_k^\top G_kg_k}\, g_k\approx(-0.469,-0.156)^\top
\end{aligned}
$$

Since $\|s_k^c\|\cong 0.494 < \Delta_k$, we go on to compute the scaled Newton step $s_k^{\hat N}$:

$$
\begin{aligned}
\gamma&=\frac {\|g_k\|_2^4}{(g_k^\top G_kg_k)(g_k^\top G_k^{-1}g_k)}=\frac{(40)^2}{(512)\left(\frac{32}7\right)}\cong 0.684\\
\eta&=0.8\gamma +0.2\cong 0.747\\
s_k^{\hat N}&=\eta\, s_k^N \cong \begin{bmatrix} -0.320\\ -0.747 \end{bmatrix}
\end{aligned}
$$

Since $\|s_k^{\hat N}\|\gt\Delta_k$, take the double-dogleg step $s_k=s_k^c+\lambda(s_k^{\hat N}-s_k^c)$, $\lambda \in (0,1)$, with $\lambda$ chosen so that $\|s_k\|=\Delta_k$.
Solve the quadratic equation

$$
\|s_k^c+\lambda(s_k^{\hat N}-s_k^c)\|_2^2=\Delta_k^2
$$

which gives $\lambda \cong 0.876$. Therefore,

$$
\begin{aligned}
s_k&=s_k^c+\lambda(s_k^{\hat N}-s_k^c)=\begin{bmatrix} -0.340\\ -0.669 \end{bmatrix}\\
\mathbf x_{k+1}&=\mathbf x_k+s_k=\begin{bmatrix} 0.660\\ 0.331 \end{bmatrix}
\end{aligned}
$$
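The quantities in this example can be reproduced numerically; a minimal sketch, with the trust-region radius left as a parameter of the boundary equation:

```python
import numpy as np

# Reproduce the double-dogleg quantities above for f(x) = x1^4 + x1^2 + x2^2
# at x_k = (1, 1)
xk = np.array([1.0, 1.0])
g = np.array([4 * xk[0] ** 3 + 2 * xk[0], 2 * xk[1]])   # g_k = (6, 2)
G = np.array([[12 * xk[0] ** 2 + 2, 0.0], [0.0, 2.0]])  # G_k

s_newton = -np.linalg.solve(G, g)                 # s_k^N = (-3/7, -1)
s_cauchy = -(g @ g) / (g @ G @ g) * g             # ~ (-0.469, -0.156)

gamma = (g @ g) ** 2 / ((g @ G @ g) * (g @ np.linalg.solve(G, g)))
eta = 0.8 * gamma + 0.2                           # ~ 0.747
s_hat = eta * s_newton                            # ~ (-0.320, -0.747)

def dogleg_lambda(delta):
    """Solve ||s_cauchy + lam * (s_hat - s_cauchy)||_2 = delta for lam."""
    d = s_hat - s_cauchy
    a, bq, c = d @ d, 2 * (s_cauchy @ d), s_cauchy @ s_cauchy - delta ** 2
    return (-bq + np.sqrt(bq ** 2 - 4 * a * c)) / (2 * a)

print(s_newton, s_cauchy, eta)
```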

2. Active Set Method

* For $y\in \{y \mid \nabla c_i(x^*)^\top y=0,\ \forall i\}$, i.e. on the hyperplane $c(x)=0$, the problem behaves just like the unconstrained case $y\in \mathbb R^n$.
* A set of indices records which constraints have reached their boundary.

  * Active set at $x^*$: $I(x^*)=\{i \mid c_i(x^*)=0\}$ $\Longleftrightarrow$ $\exists\, \lambda_0,\ldots,\lambda_m\geq 0$ s.t. $\lambda_0\nabla f(x^*)-\sum_{i=1}^m \lambda_i^* \nabla c_i(x^*)=0$
  * If the $\nabla c_i(x^*)$, $i\in I(x^*)$, are linearly independent, then there exists a vector $\mathbf \lambda^*$ (the dual variables) such that $\nabla f(x^*)-\sum_{i=1}^m \lambda_i^* \nabla c_i(x^*)=0$, $\lambda_i^*\geq 0$, together with the complementary slackness condition $\lambda_i^* c_i(x^*)=0$
  * The constrained problem $\rightarrow$ solving a system of equations
* Set of descent directions of the objective: $F=\{d \mid \nabla f(x)^\top d\lt 0\}$
* Feasible directions: $D=\{d \mid \nabla c_i(x)^\top d\geq 0,\ i\in I(x)\}$

$$
\begin{cases} \nabla f(x)^\top d<0\\ -\nabla c_i(x)^\top d\leq 0 \end{cases}\ \text{has no solution}\Longleftrightarrow F_{x^*}\cap D_{x^*}=\varnothing
$$

Gordan's theorem: $\mathbf A\mathbf x \gt 0$ has a solution $\Longleftrightarrow$ there is no nonzero $\mathbf y \geq 0$ with $\mathbf A^\top \mathbf y=0$.

Example: solve with the active set method

$$
\begin{aligned}
\min\ & x_1^2-x_1x_2+2x_2^2-x_1-10x_2\\
\text{s.t.}\ & -3x_1-2x_2\geq -6,\\
& x_1\geq 0,\ x_2\geq 0
\end{aligned}
$$

Converted to standard form:

$$
\begin{aligned}
\min\ & x_1^2-x_1x_2+2x_2^2-x_1-10x_2\\
\text{s.t.}\ & 3x_1+2x_2\leq 6,\ -x_1\leq 0,\ -x_2\leq 0
\end{aligned}
$$

Label the constraints above 1 through 3.

$$
\begin{aligned}
G&= \begin{bmatrix} 2&-1 \\ -1&4 \end{bmatrix}, \quad r= \begin{bmatrix} -1 \\ -10 \end{bmatrix},\\
A&= \begin{bmatrix} 3&2 \\ -1&0 \\ 0&-1 \end{bmatrix}, \quad b= \begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix}
\end{aligned}
$$

Compute $\nabla f(\mathbf x)=(2x_1-x_2-1,\ 4x_2-x_1-10)^\top$. Since $\mathbf x^1=(\frac 27,\frac {18}7)^\top$, the active set is $I(\mathbf x^1)=\{1\}$ and $\nabla f(\mathbf x^1)=(-3,0)^\top$; the corresponding equality-constrained problem is

$$
\min\ d_1^2-d_1d_2+2d_2^2-d_1-10d_2,\quad \text{s.t.}\ 3d_1+2d_2=6
$$

The first-order necessary conditions of the equality-constrained problem give

$$
\begin{bmatrix} 2&-1&3 \\ -1&4&2 \\ 3&2&0 \end{bmatrix} \begin{bmatrix} d_1\\ d_2\\ \lambda_1 \end{bmatrix} = \begin{bmatrix} 1\\ 10\\ 6 \end{bmatrix}
$$

Solving this system gives
$d_1=\frac 12$, $d_2=\frac 94$, $\lambda_1=\frac 34$, i.e. $\mathbf \lambda=(\frac 34,0,0)^\top$, and $\mathbf x^2=(\frac 12,\frac 94)^\top$ is the optimal solution (checked numerically below).
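A quick numerical check of this system with numpy:

```python
import numpy as np

# Solve the 3x3 system from the first-order conditions above
K = np.array([[2.0, -1.0, 3.0],
              [-1.0, 4.0, 2.0],
              [3.0, 2.0, 0.0]])
rhs = np.array([1.0, 10.0, 6.0])
d1, d2, lam1 = np.linalg.solve(K, rhs)
print(d1, d2, lam1)   # 0.5, 2.25, 0.75  ->  x^2 = (1/2, 9/4)
```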

3. Direct Methods

1. $x_1\in \mathbb R^n$; $d_1,\ldots,d_n$ linearly independent
2. Line search from $x_1$ along $d_1$ gives $x_2$, i.e. $x_2$ is the minimizer of $f(x)$ on $\{z \mid z=x_1+\alpha_1 d_1\}$
3. With $\beta_1,\ldots,\beta_n$ and $\nu^{(1)}=x_2+\beta_2 d_2$, let $d_2=\nu^{(2)}-x_2$, where $\nu^{(2)}$ is the minimizer of $f(x)$ on $\{z \mid z=\nu^{(1)}+\alpha_1 d_1\}$
   * $\nabla f(\nu^{(2)})^\top d_1=0$, $\nabla f(x_2)^\top d_1=0$
   * $d_2^\top Hd_1=0$ (conjugacy, with $d_2=\nu^{(2)}-x_2$)
4. Let $d_3=\nu^{(3)}-x_3$, where $\nu^{(3)}$ is the minimizer of $f(x)$ on $\{z \mid z=x_1+\alpha_1 d_1+\alpha_2 d_2\}$
5. $x_{k+1}$ is the minimizer of $f(x)$ on $\{z \mid z=x_1+\sum_{i=1}^k\alpha_i d_i\}$; let $d_{k+1}=\nu^{(k+1)}-x_{k+1}$

References

  • 《最优化理论与方法》陈宝林
  • 《非线性优化计算方法》袁亚湘
  • 《Numerical Optimization》Jorge Nocedal

Glossary

quadratic: second-order

Miscellaneous

The 0.618 method (golden-section search)

When the 0.618 method is used to locate an optimum, the optimum cannot be pinned down exactly in a finite number of trials, but as the number of trials grows, the interval known to contain it keeps shrinking. The ratio of the surviving interval to the original interval measures the efficiency of a trial method and is called the precision. With the 0.618 method, each trial shrinks the surviving interval to 0.618 of its previous length, so after $n$ trials the precision is:

$$
\delta_n=0.618^n
$$
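A minimal sketch of the 0.618 search itself; the interval, tolerance, and test function below are illustrative:

```python
def golden_section(f, a, b, tol=1e-6):
    """0.618 (golden-section) search for the minimum of a unimodal f on [a, b]."""
    rho = 0.6180339887            # (sqrt(5) - 1) / 2
    x1 = b - rho * (b - a)        # two interior trial points
    x2 = a + rho * (b - a)
    while b - a > tol:
        if f(x1) < f(x2):         # the minimum lies in [a, x2]
            b, x2 = x2, x1
            x1 = b - rho * (b - a)
        else:                     # the minimum lies in [x1, b]
            a, x1 = x1, x2
            x2 = a + rho * (b - a)
    return (a + b) / 2

# each trial shrinks the interval by the factor 0.618, matching delta_n
print(golden_section(lambda x: (x - 1.5) ** 2, 0.0, 4.0))  # -> ~1.5
```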

KKT Conditions

In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions (also known as the Kuhn–Tucker conditions) are first order necessary conditions for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. The system of equations corresponding to the KKT conditions is usually not solved directly, except in the few special cases where a closed-form solution can be derived analytically. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations.

Programming

Anaconda

footnote 1. 《2.7 数学优化:找到函数的最优解》
