<h1 align = "center">Differential Evolution - A simple and efficient adaptive scheme for global optimization over continuous spaces</h1>
by Rainer Storn and Kenneth Price, TR-95-012
March 1995
<h2>Abstract</h2>
A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous-space functions is presented. By means of an extensive testbed, which includes the De Jong functions, it will be demonstrated that the new method converges faster and with more certainty than Adaptive Simulated Annealing as well as the Annealed Nelder&Mead approach, both of which have a reputation for being very powerful. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
For definitions of the De Jong test functions, see http://www.geatbx.com/docu/fcnindex-01.html
The simplest test function is De Jong's function 1, also known as the sphere model. It is continuous, convex, and unimodal.
Function definition:
f1(x) = sum(x(i)^2), i = 1:n, with -5.12 <= x(i) <= 5.12.
Global minimum:
f1(x) = 0 at x(i) = 0, i = 1:n.
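As a quick illustration, here is a minimal Python evaluation of De Jong's function 1; the function name and test points are our own, not from the paper:

```python
import numpy as np

def sphere(x):
    """De Jong's function 1 (sphere model): f1(x) = sum of x(i)^2,
    defined for -5.12 <= x(i) <= 5.12."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

print(sphere([0.0, 0.0, 0.0]))   # 0.0 -- the global minimum at x(i) = 0
print(sphere([1.0, -2.0, 3.0]))  # 14.0
```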
<h2>Introduction</h2>
Problems which involve global optimization over continuous spaces are ubiquitous throughout the scientific community. In general, the task is to optimize certain properties of a system by pertinently choosing the system parameters. For convenience, a system's parameters are usually represented as a vector. The standard approach to an optimization problem begins by designing an objective function that can model the problem's objectives while incorporating any constraints. Especially in the circuit design community, methods are in use which do not need an objective function [1], [2], [3]. Although these methods can make formulating a problem simpler, they are usually inferior to techniques which make full use of an objective function. Consequently, we restrict ourselves to optimization methods which fully use the objective function. In most cases, the objective function is designed to transform the optimization problem into a minimization task. To this end, we will limit our investigation in the following to minimization problems.
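For example, a problem originally posed as constrained maximization can be recast as unconstrained minimization by negating the quantity to be maximized and folding the constraint into a penalty term. The sketch below is purely illustrative; the particular objective, constraint, and penalty weight are made up:

```python
import numpy as np

def objective(x, penalty_weight=1.0e3):
    """Minimization-form objective for: maximize g(x) subject to h(x) <= 0.
    Both g and h are illustrative placeholders."""
    x = np.asarray(x, dtype=float)
    g = -np.sum((x - 1.0) ** 2)          # quantity we actually want to maximize
    h = np.sum(np.abs(x)) - 2.0          # constraint, feasible when h(x) <= 0
    violation = max(0.0, h)
    return -g + penalty_weight * violation ** 2   # smaller is better
```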
When the objective function is nonlinear and non-differentiable, direct search approaches are the methods of choice. The best known of these are the algorithms by Nelder&Mead [4], by Hooke&Jeeves [4], genetic algorithms [5], and evolutionary algorithms [6], [7], with the latter being truly continuous counterparts of genetic algorithms. At the heart of every direct search method is a strategy that generates variations of the parameter vectors. Once a variation is generated, a decision must be made whether or not to accept the newly derived parameters. All basic direct search methods use the greedy criterion to make this decision. Under the greedy criterion, a new parameter vector is accepted if and only if it reduces the value of the objective function. Although the greedy decision process converges fairly fast, it runs the risk of becoming trapped in a local minimum. Inherently parallel search techniques like genetic and evolutionary algorithms have some built-in safeguards to forestall misconvergence.
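A generic direct search with greedy acceptance can be sketched as follows; the Gaussian perturbation here is only a stand-in for whatever variation scheme a particular method actually uses:

```python
import numpy as np

def greedy_search(f, x0, step=0.1, iterations=10_000, seed=0):
    """Direct search with the greedy criterion: a trial vector replaces
    the current one if and only if it lowers the objective function."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iterations):
        trial = x + rng.normal(scale=step, size=x.shape)  # generate a variation
        f_trial = f(trial)
        if f_trial < fx:          # greedy acceptance: only downhill moves
            x, fx = trial, f_trial
    return x, fx
```

On a unimodal function like the sphere model above this loop converges quickly, but on a multimodal function it can stall in whichever local minimum it reaches first.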
By running several vectors simultaneously, superior parameter configurations can help other vectors escape local minima. Another method which can extricate a parameter vector from a local minimum is Simulated Annealing [8], [9], [10]. Annealing relaxes the greedy criterion by occasionally permitting an uphill move. Such moves potentially allow a parameter vector to climb out of a local minimum. As the number of iterations increases, the probability of accepting an uphill move decreases. In the long run, this leads to the greedy criterion. While all direct search methods lend themselves to annealing, it has mostly been used just for the Random Walk, which itself is the simplest case of an evolutionary algorithm [6]. Nevertheless, attempts have been made to anneal other direct searches like the method of Nelder&Mead [10] and genetic algorithms [8], [11].
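In code, annealing amounts to replacing the strict downhill test with a probabilistic one: an uphill move of size delta is accepted with probability exp(-delta/T), where the temperature T shrinks over the iterations. The cooling schedule below is an illustrative choice, not the one used in any of the cited papers:

```python
import math
import numpy as np

def annealed_walk(f, x0, step=0.1, iterations=10_000, t0=1.0, seed=0):
    """Random walk with simulated-annealing acceptance. As the temperature
    decays toward zero, uphill moves become ever less likely and the rule
    approaches the greedy criterion."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for k in range(iterations):
        temperature = t0 / (1.0 + k)          # illustrative cooling schedule
        trial = x + rng.normal(scale=step, size=x.shape)
        f_trial = f(trial)
        delta = f_trial - fx
        # Accept downhill moves always, uphill moves with prob exp(-delta/T).
        if delta < 0.0 or rng.random() < math.exp(-delta / temperature):
            x, fx = trial, f_trial
    return x, fx
```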