'''Local unimodal sampling (LUS)''' is a method for doing numerical [[optimization (mathematics)|optimization]] which does not require the [[gradient]] of the problem to be optimized, and LUS can hence be used on functions that are not [[continuous function|continuous]] or [[differentiable function|differentiable]]. Such optimization methods are also known as direct-search, derivative-free, or black-box methods.

LUS is attributed to Pedersen<ref name=pedersen08thesis/> and works by maintaining a single position in the search-space and moving to a new position in case of improvement to the fitness or cost function. New positions are sampled from the neighbourhood of the current position using a [[uniform distribution (continuous)|uniform distribution]]. The sampling-range is initially the full search-space and decreases exponentially during optimization.

== Motivation ==

Using a fixed sampling-range for randomly sampling the search-space of a [[unimodal function]], the probability of finding improved positions decreases as the optimum is approached. This is because a shrinking portion of the sampling-range yields improved fitness (this is easiest to see in the single-dimensional case). Hence, the sampling-range must be decreased somehow. Pedersen noted that the [[bisection method]] works well for optimizing unimodal functions and translated its halving of the search interval into a formula that has a similar effect when sampling from a uniform distribution.

== Algorithm ==

Let ''f'':&nbsp;{{Unicode|&#x211D;}}<sup>''n''</sup>&nbsp;→ {{Unicode|&#x211D;}} be the fitness or cost function which must be minimized. Let '''x'''&nbsp;∈ {{Unicode|&#x211D;}}<sup>''n''</sup> designate a position or candidate solution in the search-space. The LUS algorithm can then be described as:

* Initialize '''x'''&nbsp;~&nbsp;''U''('''b<sub>lo</sub>''',&nbsp;'''b<sub>up</sub>''') with a random [[uniform distribution (continuous)|uniform]] position in the search-space, where '''b<sub>lo</sub>''' and '''b<sub>up</sub>''' are the lower and upper boundaries, respectively.
* Set the initial sampling range to cover the entire search-space: '''d'''&nbsp;=&nbsp;'''b<sub>up</sub>'''&nbsp;&minus;&nbsp;'''b<sub>lo</sub>'''
* Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following:
** Pick a random vector '''a'''&nbsp;~&nbsp;''U''(&minus;'''d''',&nbsp;'''d''')
** Add this to the current position '''x''' to create the new potential position '''y'''&nbsp;=&nbsp;'''x'''&nbsp;+&nbsp;'''a'''
** If (''f''('''y''')&nbsp;<&nbsp;''f''('''x''')) then move to the new position by setting '''x'''&nbsp;=&nbsp;'''y''', otherwise decrease the sampling-range by multiplying with factor&nbsp;''q'' (see below): '''d'''&nbsp;=&nbsp;''q''&nbsp;'''d'''
* Now '''x''' holds the best-found position.
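
A minimal Python sketch of the loop above may help make it concrete. The function name <code>lus_minimize</code>, the iteration budget, and the clipping of candidates to the box boundaries are illustrative assumptions, not part of the published description:

<syntaxhighlight lang="python">
import numpy as np

def lus_minimize(f, b_lo, b_up, alpha=1/3, max_iters=1000, rng=None):
    """Local unimodal sampling: minimize f over the box [b_lo, b_up]."""
    rng = np.random.default_rng() if rng is None else rng
    b_lo = np.asarray(b_lo, dtype=float)
    b_up = np.asarray(b_up, dtype=float)
    n = b_lo.size
    q = 2.0 ** (-alpha / n)             # sampling-range decrease factor

    x = rng.uniform(b_lo, b_up)         # random uniform start position
    fx = f(x)
    d = b_up - b_lo                     # initial range covers the whole search-space

    for _ in range(max_iters):
        a = rng.uniform(-d, d)          # sample the neighbourhood of x
        y = np.clip(x + a, b_lo, b_up)  # clipping to the box is an added practical choice
        fy = f(y)
        if fy < fx:
            x, fx = y, fy               # greedy move on improvement
        else:
            d = q * d                   # otherwise shrink the sampling-range
    return x, fx

# Example: minimize the sphere function over [-5, 5]^2.
x_best, f_best = lus_minimize(lambda x: float(np.sum(x * x)), [-5.0, -5.0], [5.0, 5.0])
</syntaxhighlight>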

== Sampling-range decrease factor ==

The factor ''q'' for exponentially decreasing the sampling-range is defined by ''q''&nbsp;=&nbsp;2<sup>&minus;''α''/''n''</sup>, where ''n'' is the dimensionality of the search-space and ''α'' is a user-adjustable parameter. Setting 0&nbsp;<&nbsp;''α''&nbsp;<&nbsp;1 causes a slower decrease of the sampling-range, and setting ''α''&nbsp;>&nbsp;1 causes a more rapid decrease. Typically it is set to ''α''&nbsp;=&nbsp;1/3.

Note that decreasing the sampling-range ''n'' times by a factor ''q'' results in an overall decrease of ''q''<sup>''n''</sup>&nbsp;=&nbsp;2<sup>&minus;''α''</sup>, and for ''α''&nbsp;=&nbsp;1 this means a halving of the sampling-range.
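
This identity is easy to check numerically; the dimensionality ''n''&nbsp;=&nbsp;5 below is chosen arbitrarily:

<syntaxhighlight lang="python">
n, alpha = 5, 1.0
q = 2.0 ** (-alpha / n)   # per-step decrease factor
print(q ** n)             # 0.5: n decreases halve the range when alpha = 1
</syntaxhighlight>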

The main differences between LUS and the Luus&ndash;Jaakola (LJ) method<ref name=luus73optimization/> are that the decrease factor ''q'' in LUS depends on the dimensionality ''n'' of the search-space, whereas LJ typically uses a fixed decrease factor of 0.95, and that LUS starts out by sampling the entire search-space, whereas LJ starts out by sampling only a fraction of the search-space.

== Usage and criticism ==

LUS is used in [[meta-optimization]] for tuning the behavioural parameters of another optimization method (see e.g. Pedersen<ref name=pedersen08thesis/><ref name=pedersen08simplifying/>), because it is simple and usually finds satisfactory solutions using few iterations, which is necessary for such computationally expensive optimization problems. The rapidity of LUS comes from its exponential decrease of the sampling-range. This works well for unimodal problems, but it is a weakness for multi-modal problems: the sampling-range is sometimes decreased too rapidly, before the basin holding the global minimum has been found. This means LUS must typically be run several times to locate the minimum of multi-modal problems, and sometimes it fails completely.
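
As a toy illustration of such tuning, the sketch below reuses the <code>lus_minimize</code> function from the Algorithm section to tune the step size of a deliberately simple inner optimizer. Here <code>random_search</code>, <code>meta_fitness</code>, and all numeric values are invented for the example and do not come from the cited work:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def random_search(f, x0, step, iters=100):
    # Toy inner optimizer: fixed-step random search (illustrative only).
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        y = x + rng.normal(scale=step, size=x.size)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return fx

def meta_fitness(params):
    # Meta-fitness: average end result of the inner optimizer over 5 runs.
    step = params[0]
    sphere = lambda z: float(np.sum(z * z))  # cheap test problem
    return float(np.mean([random_search(sphere, [3.0, 3.0], step)
                          for _ in range(5)]))

# Tune the inner step size with LUS (lus_minimize defined earlier).
best_params, best_perf = lus_minimize(meta_fitness, [1e-3], [2.0], max_iters=100)
</syntaxhighlight>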

LUS has also been criticized informally amongst researchers for using a greedy update rule, and it has been suggested to use a stochastic update rule as in the [[Metropolis–Hastings algorithm]] or [[simulated annealing]]. However, it was noted in section 2.3.3 of Pedersen's thesis<ref name=pedersen08thesis/> that such stochastic update rules are not useful when sampling real-valued search-spaces: they merely cause the optimizer to move back and forth towards the optimum, with a still low probability of escaping local optima and a now lowered probability of converging to any optimum.

== History ==

The abbreviation LUS is the word for [[louse]] in the [[Danish language]]. According to Pedersen the name was chosen as a humorous tribute to a group of Danish filmmakers whose films feature several louse-puns, and not because the method has any resemblance to the movement of a real louse. The LUS method was developed during Pedersen's doctoral research at the University of Southampton, England.

== See also ==

* [[Random optimization]] is a related family of optimization methods which sample from a [[normal distribution]] instead of a uniform distribution.
* [[Random search]] is a related family of optimization methods which sample from a [[hypersphere]] surrounding the current position instead of a uniform distribution.
* [[Pattern search (optimization)|Pattern search]] takes steps along the axes of the search-space using exponentially decreasing step sizes.

== References ==

{{reflist|refs=
<ref name=pedersen08thesis>
{{cite book
|type=PhD thesis
|title=Tuning & Simplifying Heuristical Optimization
|url=http://www.hvass-labs.org/people/magnus/thesis/pedersen08thesis.pdf
|last=Pedersen
|first=M.E.H.
|year=2010
|publisher=University of Southampton, School of Engineering Sciences, Computational Engineering and Design Group
}}
</ref>

<ref name=pedersen08simplifying>
{{cite journal
|last1=Pedersen
|first1=M.E.H.
|last2=Chipperfield
|first2=A.J.
|url=http://www.hvass-labs.org/people/magnus/publications/pedersen08simplifying.pdf
|title=Simplifying particle swarm optimization
|journal=Applied Soft Computing
|year=2010
|volume=10
|pages=618&ndash;628
}}
</ref>

<ref name=luus73optimization>
{{cite journal
|last1=Luus
|first1=R.
|last2=Jaakola
|first2=T.H.I.
|title=Optimization by direct search and systematic reduction of the size of search region
|journal=American Institute of Chemical Engineers Journal (AIChE)
|year=1973
|volume=19
|number=4
|pages=760&ndash;766
}}
</ref>
}}


{{Optimization algorithms}}
