Minimum Population Search
In evolutionary computation, Minimum Population Search (MPS) is a computational method that optimizes a problem by iteratively trying to improve a set of candidate solutions with regard to a given measure of quality. It solves a problem by evolving a small population of candidate solutions by means of relatively simple arithmetical operations.
MPS is a metaheuristic as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. For problems where finding the precise global optimum is less important than finding an acceptable local optimum in a fixed amount of time, using a metaheuristic such as MPS may be preferable to alternatives such as brute-force search or gradient descent.
MPS is used for multidimensional real-valued functions but does not use the gradient of the problem being optimized, which means MPS does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. MPS can therefore also be used on optimization problems that are not even continuous, are noisy, change over time, etc.
Background
In a similar way to Differential evolution, MPS uses difference vectors between the members of the population to generate new solutions. It attempts to make efficient use of function evaluations by maintaining a small population size. If the population size (n) is smaller than the dimensionality (d) of the search space, then the solutions generated through difference vectors are constrained to the (n − 1)-dimensional hyperplane that contains the population; a smaller population thus leads to a more restricted subspace. With a population size equal to the dimensionality of the problem (n = d), the "line/hyperplane points" in MPS are generated within a (d − 1)-dimensional hyperplane, and taking a step orthogonal to this hyperplane allows the search process to cover all the dimensions of the search space.[1]
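As an illustration (not taken from the MPS literature), the following NumPy sketch shows how points built from a population member and its difference vectors remain in the low-dimensional affine subspace spanned by a small population, and how an orthogonal step escapes it. The dimensions (d = 5, n = 3), the coefficients, and the helper affine_rank are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 3                                  # search-space dimension, population size
pop = rng.uniform(-1.0, 1.0, size=(n, d))    # a small population of n points in d dimensions

def affine_rank(points):
    """Dimension of the affine subspace spanned by a set of points."""
    return np.linalg.matrix_rank(points[1:] - points[0])

# Any point built from a member plus combinations of difference vectors stays
# in the (n - 1)-dimensional hyperplane containing the population.
diffs = pop[1:] - pop[0]
in_plane = pop[0] + 0.7 * diffs[0] - 1.3 * diffs[1]
print(affine_rank(np.vstack((pop, in_plane))))          # 2: still in the same plane

# A step orthogonal to that hyperplane escapes it, letting the search reach
# the remaining dimensions of the space.
q, _ = np.linalg.qr(diffs.T)                 # orthonormal basis of the plane's directions
r = rng.standard_normal(d)
orth = r - q @ (q.T @ r)                     # component of r orthogonal to the plane
print(affine_rank(np.vstack((pop, in_plane + orth))))   # 3: the plane has been left
```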
Population size is a fundamental parameter in the performance of population-based heuristics. Larger populations promote exploration, but for a fixed budget of function evaluations they also allow fewer generations, which can reduce the chance of convergence. Searching with a small population increases the chances of convergence and makes more efficient use of function evaluations, but it also raises the risk of premature convergence. If that risk can be avoided, a population-based heuristic can benefit from the efficiency and faster convergence rate of a smaller population. Avoiding premature convergence requires a diversified population; by including techniques that explicitly increase diversity and exploration, it is possible to use smaller populations with less risk of premature convergence.[1]
Thresheld Convergence
Thresheld Convergence (TC) is a diversification technique which attempts to separate the processes of exploration and exploitation. TC uses a "threshold" function to establish a minimum search step; managing this step makes it possible to control the transition from exploration to exploitation, so convergence is "held" back until the last stages of the search process.[2] The goal of this controlled transition is to avoid an early concentration of the population around a few search regions and the resulting loss of diversity, which can cause premature convergence. Thresheld Convergence has been successfully applied to several population-based metaheuristics such as Particle Swarm Optimization, Differential evolution,[3] Evolution strategies,[4] Simulated annealing[5] and Estimation of Distribution Algorithms.
The ideal case for Thresheld Convergence is to have one sample solution in each attraction basin, with each sample solution having the same relative fitness with respect to its local optimum; enforcing a minimum step aims to approximate this ideal case. In MPS, Thresheld Convergence is used to preserve diversity and avoid premature convergence by establishing a minimum search step. By disallowing new solutions that are too close to members of the current population, TC forces strong exploration during the early stages of the search while preserving the diversity of the (small) population.
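A minimal sketch of a decaying threshold function of the kind described above is given below. The specific functional form, and the values of alpha (initial threshold as a fraction of the search-space diagonal) and gamma (decay rate), are illustrative assumptions rather than values fixed by this article.

```python
def tc_threshold(evals_used, evals_max, diagonal, alpha=0.2, gamma=3.0):
    """Minimum search step (threshold) that decays as the evaluation budget is used.

    alpha and gamma are hypothetical defaults chosen for illustration.
    """
    remaining = (evals_max - evals_used) / evals_max
    return alpha * diagonal * remaining ** gamma
```

Early in the search the threshold is a sizeable fraction of the search-space diagonal, forcing new solutions away from existing ones; as the budget is consumed the threshold shrinks toward zero and the search is allowed to converge.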
Algorithm
A basic variant of the MPS algorithm works with a population whose size equals the dimensionality of the problem. New solutions are generated by exploring the hyperplane defined by the current solutions (by means of difference vectors) and then taking an additional orthogonal step so that the search is not confined to this hyperplane. The step sizes are controlled by the Thresheld Convergence technique, which gradually reduces them as the search process advances.
An outline for the algorithm is given below:
- Generate the initial population. Allowing these solutions to lie near the bounds of the search space generally gives good results: x_i,j = (b_low,j + b_up,j)/2 + a_i,j · (b_up,j − b_low,j)/2, where x_i is the i-th population member, the a_i,j are random numbers which can be −1 or 1, and b_low,j and b_up,j are the lower and upper bounds on each dimension.
- While a stop condition is not reached:
- Update the Thresheld Convergence values (min_step and max_step)
- Calculate the centroid of the current population (x_c)
- For each member of the population (x_i), generate a new offspring x_i' as follows:
- Uniformly generate a scaling factor (F_i) between −max_step and max_step
- Generate a vector (orth_i) orthogonal to the difference vector between x_i and x_c
- Calculate a scaling factor (O_i) for the orthogonal vector such that the length of the combined step lies between min_step and max_step
- Generate the new solution by adding the scaled difference and orthogonal vectors to the original solution: x_i' = x_i + F_i · (x_i − x_c) + O_i · orth_i
- Form the next population by selecting the best members from the old population and the new offspring, discarding the least fit.
- Return the single best solution or the best population found as the final result.
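The outline above can be turned into a short NumPy sketch for minimizing a black-box function over a box-constrained domain. The threshold schedule, the 2:1 ratio between max_step and min_step, the clipping of offspring to the bounds, and all parameter defaults are assumptions made for illustration; they are not prescribed by this article.

```python
import numpy as np

def mps_minimize(f, lower, upper, max_evals=20_000, alpha=0.3, gamma=3.0, seed=None):
    """Illustrative Minimum Population Search sketch for minimizing f over a box."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    d = lower.size                                    # population size = problem dimension
    diagonal = np.linalg.norm(upper - lower)

    # Initial population near the bounds: each coordinate pushed to its lower
    # or upper bound by a random sign a_ij in {-1, +1}.
    signs = rng.choice((-1.0, 1.0), size=(d, d))
    pop = (lower + upper) / 2.0 + signs * (upper - lower) / 2.0
    fit = np.array([f(x) for x in pop])
    evals = d

    while evals + d <= max_evals:
        # Thresheld Convergence: step sizes shrink as the budget is consumed
        # (assumed schedule; max_step = 2 * min_step is also an assumption).
        min_step = alpha * diagonal * ((max_evals - evals) / max_evals) ** gamma
        max_step = 2.0 * min_step
        centroid = pop.mean(axis=0)

        offspring = np.empty_like(pop)
        for i, x in enumerate(pop):
            diff = x - centroid
            norm = np.linalg.norm(diff)
            if norm < 1e-12:                          # degenerate member: pick a random direction
                diff = rng.standard_normal(d)
                norm = np.linalg.norm(diff)
            unit = diff / norm

            # Step along the difference vector, bounded by max_step
            # (equivalent to F_i * (x_i - x_c) with F_i = parallel / norm).
            parallel = rng.uniform(-max_step, max_step)

            # Random direction orthogonal to the difference vector.
            r = rng.standard_normal(d)
            orth = r - (r @ unit) * unit
            orth /= np.linalg.norm(orth) + 1e-12

            # Orthogonal step length chosen so the combined step lies
            # between min_step and max_step.
            lo = np.sqrt(max(min_step**2 - parallel**2, 0.0))
            hi = np.sqrt(max(max_step**2 - parallel**2, 0.0))
            ortho_len = rng.uniform(lo, hi)

            offspring[i] = np.clip(x + parallel * unit + ortho_len * orth, lower, upper)

        off_fit = np.array([f(x) for x in offspring])
        evals += d

        # Keep the best d members of parents and offspring combined.
        combined = np.vstack((pop, offspring))
        combined_fit = np.concatenate((fit, off_fit))
        best = np.argsort(combined_fit)[:d]
        pop, fit = combined[best], combined_fit[best]

    i_best = int(np.argmin(fit))
    return pop[i_best], fit[i_best]
```

For example, mps_minimize(lambda x: float(np.sum(x**2)), [-5.0] * 10, [5.0] * 10, seed=1) returns an approximate minimizer of the 10-dimensional sphere function together with its function value.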
References
- ^ a b Bolufé-Röhler, Antonio; Chen, Stephen (2013). "Minimum Population Search - Lessons from building a heuristic technique with two population members". Congress on Evolutionary Computation (CEC'2013). pp. 2061–2068.
- ^ Chen, Stephen; Montgomery, James; Bolufé-Röhler, Antonio; Gonzalez-Fernandez, Yasser (2015). "A Review of Thresheld Convergence". GECONTEC: Revista Internacional de Gestión del Conocimiento y la Tecnología.
- ^ Bolufé-Röhler, Antonio; Estévez-Velarde, Suilán; Piad-Morffis, Alejandro; Chen, Stephen; Montgomery, James (2013). "Differential Evolution with Thresheld Convergence". Congress on Evolutionary Computation (CEC'2013). pp. 40–47.
- ^ Piad-Morffis, Alejandro; Estévez-Velarde, Suilán; Bolufé-Röhler, Antonio; Montgomery, James; Chen, Stephen (2015). "Evolution strategies with thresheld convergence". Congress on Evolutionary Computation (CEC'2015). pp. 2097–2104.
- ^ Chen, S.; Xudiera, C.; Montgomery, J. (2012). "Simulated annealing with thresheld convergence". IEEE Congress on Evolutionary Computation (CEC). pp. 1–7.