'''Local unimodal sampling (LUS)''' is a method for doing numerical ] which does not require the gradient of the problem to be optimized, and LUS can hence be used on functions that are not continuous or differentiable. Such optimization methods are also known as direct-search, derivative-free, or black-box methods.
LUS is attributed to Pedersen <ref name=pedersen08thesis/> and works by keeping a single position in the search-space and moving to a new position in case of improvement to the fitness or cost function. New positions are sampled from the neighbourhood of the current position using a uniform distribution. The sampling-range is initially the full search-space and decreases exponentially during optimization.
LUS is similar to the Luus–Jaakola (LJ) optimizer <ref name=luus73optimization/>, the main difference being an improvement in determining the exponential rate of sampling-range decrease. These optimizers have been applied successfully in such diverse fields as optimal control <ref name=bojkov93application/>, search-domain reduction in random optimization <ref name=spaans92importance/>, metallurgical reactor optimization <ref name=papangelakis93reactor/>, phase and chemical equilibrium calculations <ref name=lee99phase/>, and statistical machine translation <ref name=schuth10thesis/>, to name a few.
== Motivation ==
Using a fixed sampling-range for randomly sampling the search-space of a unimodal function, the probability of finding improved positions decreases as the optimum is approached, because a shrinking portion of the sampling-range yields improved fitness. Hence, the sampling-range must be decreased somehow. ] works well for optimizing unimodal functions, and its halving of the sampling-range was translated into a formula that has a similar effect when a uniform distribution is used for the sampling.
== Algorithm ==
Let ''f'': {{Unicode|ℝ}}<sup>''n''</sup> → {{Unicode|ℝ}} be the fitness or cost function which must be minimized. Let '''x''' ∈ {{Unicode|ℝ}}<sup>''n''</sup> designate a position or candidate solution in the search-space. The LUS algorithm can then be described as:
* Initialize '''x''' ~ ''U''('''b<sub>lo</sub>''', '''b<sub>up</sub>''') with a random uniformly distributed position in the search-space, where '''b<sub>lo</sub>''' and '''b<sub>up</sub>''' are the lower and upper boundaries, respectively.
* Set the initial sampling range to cover the entire search-space: '''d''' = '''b<sub>up</sub>''' − '''b<sub>lo</sub>'''
* Until a termination criterion is met (e.g. a number of iterations performed, or adequate fitness reached), repeat the following:
** Pick a random vector '''a''' ~ ''U''(−'''d''', '''d''')
** Add this to the current position '''x''' to create the new potential position '''y''' = '''x''' + '''a'''
** If ''f''('''y''') < ''f''('''x''') then move to the new position by setting '''x''' = '''y'''; otherwise decrease the sampling-range by multiplying with the factor ''q'' (see below): '''d''' = ''q'' '''d'''
* Now '''x''' holds the best-found position.
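The steps above can be sketched in a few lines of Python. This is a minimal illustration, not the author's reference implementation: the sphere test function, the parameter values, and the omission of boundary handling (sampled positions may leave the box) are assumptions made here.

```python
import numpy as np

def lus(f, b_lo, b_up, max_evals=10000, alpha=1/3, rng=None):
    """Minimize f over the box [b_lo, b_up] with local unimodal sampling."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(b_lo)
    q = 2.0 ** (-alpha / n)           # sampling-range decrease factor
    x = rng.uniform(b_lo, b_up)       # random uniform initial position
    fx = f(x)
    d = b_up - b_lo                   # initial range: the whole search-space
    for _ in range(max_evals):
        y = x + rng.uniform(-d, d)    # sample neighbour of current position
        fy = f(y)
        if fy < fx:                   # greedy update: move only on improvement
            x, fx = y, fy
        else:                         # otherwise shrink the sampling-range
            d = q * d
    return x, fx

# Usage on an illustrative test problem: the sphere function, minimum 0 at the origin.
sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = lus(sphere, np.full(3, -5.0), np.full(3, 5.0),
                     rng=np.random.default_rng(1))
```

Note that the sampling-range '''d''' is a vector here, so each dimension shrinks by the same factor ''q'' but keeps its own scale.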
== Sampling-range decrease factor ==
The factor ''q'' for exponentially decreasing the sampling-range is defined by ''q'' = 2<sup>−''α''/''n''</sup>, where ''n'' is the dimensionality of the search-space and ''α'' is a user-adjustable parameter. Setting 0 < ''α'' < 1 causes slower decrease of the sampling-range, and setting ''α'' > 1 causes more rapid decrease. Typically it is set to ''α'' = 1/3.
Note that decreasing the sampling-range ''n'' times by the factor ''q'' results in an overall decrease of ''q''<sup>''n''</sup> = 2<sup>−''α''</sup>, so for ''α'' = 1 this means a halving of the sampling-range.
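This relationship can be checked numerically; the values of ''n'' and ''α'' below are illustrative choices, not prescribed by the source:

```python
n = 5                       # dimensionality of the search-space (illustrative)
alpha = 1.0                 # user-adjustable parameter (illustrative)
q = 2.0 ** (-alpha / n)     # per-decrease factor: q = 2^(-alpha/n)
cumulative = q ** n         # effect of n successive decreases
# Mathematically q^n = 2^(-alpha), so alpha = 1 halves the sampling-range.
print(q, cumulative)
```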
The main differences between LUS and the Luus–Jaakola (LJ) method <ref name=luus73optimization/> are that the decrease factor ''q'' = 2<sup>−''α''/''n''</sup> depends on the dimensionality ''n'' of the search-space, whereas LJ typically just uses a fixed factor of 0.95, and that LUS starts out by sampling the entire search-space, whereas LJ starts out by sampling only a fraction of it.
== Criticism ==
LUS has been criticized for using a greedy update rule, and it has been suggested to use a stochastic update rule as in the ] or ]. However, it was noted in section 2.3.3 of Pedersen's thesis <ref name=pedersen08thesis/> that such stochastic update rules are not useful when sampling real-valued search-spaces: they merely cause the optimizer to move back and forth towards the optimum, with a still-low probability of escaping local optima and a lowered probability of converging to any optimum.
== See also ==
* ] is a related family of optimization methods which sample from a ] instead of a uniform distribution.
* ] takes steps along the axes of the search-space using exponentially decreasing step sizes.
== References ==
{{reflist|refs=
<ref name=pedersen08thesis>
{{cite book
|type=PhD thesis
|title=Tuning & Simplifying Heuristical Optimization
|url=http://www.hvass-labs.org/people/magnus/thesis/pedersen08thesis.pdf
|last=Pedersen
|first=M.E.H.
|year=2010
|publisher=University of Southampton, School of Engineering Sciences, Computational Engineering and Design Group
}}
</ref>
<ref name=schuth10thesis>
{{cite book
|type=MSc thesis
|title=Tuning methods in statistical machine translation
|url=http://www.anneschuth.nl/wp-content/uploads/2010/10/thesis.anne_.schuth.pdf
|last=Schuth
|first=A.G.
|year=2010
|publisher=University of Amsterdam
}}
</ref>
<ref name=luus73optimization>
{{cite journal
|last1=Luus
|first1=R.
|last2=Jaakola
|first2=T.H.I.
|title=Optimization by direct search and systematic reduction of the size of search region
|journal=American Institute of Chemical Engineers Journal (AIChE)
|year=1973
|volume=19
|issue=4
|pages=760–766
}}
</ref>
<ref name=bojkov93application>
{{cite journal
|last1=Bojkov
|first1=R.
|last2=Hansel
|first2=B.
|last3=Luus
|first3=R.
|title=Application of direct search optimization to optimal control problems
|journal=Hungarian Journal of Industrial Chemistry
|year=1993
|volume=21
|pages=177–185
}}
</ref>
<ref name=spaans92importance>
{{cite journal
|last1=Spaans
|first1=R.
|last2=Luus
|first2=R.
|title=Importance of search-domain reduction in random optimization
|journal=Journal of Optimization Theory and Applications
|year=1992
|volume=75
|pages=635–638
}}
</ref>
<ref name=lee99phase>
{{cite journal
|last1=Lee
|first1=Y.P.
|last2=Rangaiah
|first2=G.P.
|last3=Luus
|first3=R.
|title=Phase and chemical equilibrium calculations by direct search optimization
|journal=Computers & Chemical Engineering
|year=1999
|volume=23
|issue=9
|pages=1183–1191
}}
</ref>
<ref name=papangelakis93reactor>
{{cite conference
|last1=Papangelakis
|first1=V.G.
|last2=Luus
|first2=R.
|title=Reactor optimization in the pressure oxidization process
|booktitle=Proc. Int. Symp. on Modelling, Simulation and Control of Metallurgical Processes
|year=1993
|pages=159–171
}}
</ref>
}}
{{Optimization algorithms}}