Navigation function

Navigation function usually refers to a function of position, velocity, acceleration and time which is used to plan robot trajectories through the environment. Generally, the goal of a navigation function is to create feasible, safe paths that avoid obstacles while allowing a robot to move from its starting configuration to its goal configuration.

Potential functions as navigation functions

A potential function. Imagine dropping a marble on the surface. It will avoid the three obstacles and eventually reach the goal position in the center.

Potential functions assume that the environment or work space is known. Obstacles are assigned a high potential value, and the goal position is assigned a low potential. To reach the goal position, a robot only needs to follow the negative gradient of the surface.

We can formalize this concept mathematically as follows: let $X$ be the state space of all possible configurations of a robot, and let $X_g \subset X$ denote the goal region of the state space.

Then a potential function $\phi(x)$ is called a (feasible) navigation function if

  1. $\phi(x) = 0 \quad \forall x \in X_g$
  2. $\phi(x) = \infty$ if and only if no point in $X_g$ is reachable from $x$.
  3. For every reachable state $x \in X \setminus X_g$, the local operator produces a state $x'$ for which $\phi(x') < \phi(x)$.
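
As an illustration, a gradient-descent planner over a simple attractive–repulsive potential can be sketched in a few lines of Python. The quadratic attractive term, the inverse repulsive term, and all gains, radii, start and goal positions below are illustrative assumptions, not part of the formal definition above.

```python
import numpy as np

def potential(q, goal, obstacles, k_att=1.0, k_rep=100.0, d0=1.0):
    """Attractive-plus-repulsive potential at configuration q (2D)."""
    phi = 0.5 * k_att * np.sum((q - goal) ** 2)            # low near the goal
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if d < d0:                                         # repulsion only near obstacles
            phi += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return phi

def descend(start, goal, obstacles, step=0.01, eps=1e-4, max_iter=5000):
    """Follow the negative numerical gradient of the potential toward the goal."""
    q = np.asarray(start, dtype=float)
    path = [q.copy()]
    for _ in range(max_iter):
        grad = np.zeros(2)
        for i in range(2):                                 # finite-difference gradient
            dq = np.zeros(2); dq[i] = eps
            grad[i] = (potential(q + dq, goal, obstacles)
                       - potential(q - dq, goal, obstacles)) / (2 * eps)
        if np.linalg.norm(q - goal) < 1e-2:                # close enough to the goal
            break
        q = q - step * grad                                # move along the negative gradient
        path.append(q.copy())
    return np.array(path)

path = descend(start=[0.0, 0.0], goal=[5.0, 5.0],
               obstacles=[np.array([2.0, 3.0]), np.array([3.5, 1.5])])
print(path[-1])   # prints a point near the goal
```

Note that such hand-built potentials may contain spurious local minima in which the marble can get stuck; a function satisfying the axioms above avoids this by construction, since every reachable non-goal state has a successor with strictly lower value.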

Probabilistic navigation function

The probabilistic navigation function is an extension of the classical navigation function to static stochastic scenarios. The function is defined by a permitted collision probability, which limits the risk during motion. The Minkowski sum used in the classical definition is replaced with a convolution of the geometries and the probability density functions of the locations. Denoting the target position by $x_d$, the probabilistic navigation function is defined as

$\varphi(x) = \dfrac{\gamma_d(x)}{\left[\gamma_d^K(x) + \beta(x)\right]^{\frac{1}{K}}}$

where $K$ is a predefined constant, as in the classical navigation function, which ensures the Morse nature of the function; $\gamma_d(x)$ is the squared distance to the target position, $\|x - x_d\|^2$; and $\beta(x)$ takes all obstacles into account, defined as

$\beta(x) = \prod_{i=0}^{N_o} \beta_i(x)$

where $\beta_i(x)$ is based on the probability of a collision at location $x$. The probability of a collision is limited by a predetermined value $\Delta$, meaning:

$\beta_i(x) = \Delta - p^i(x)$ and $\beta_0(x) = -\Delta + p^0(x)$

where $p^i(x)$ is the probability of colliding with the $i$-th obstacle. A map $\varphi$ is said to be a probabilistic navigation function if it satisfies the following conditions:

  1. It is a navigation function.
  2. The probability of a collision is bounded by a predefined probability $\Delta$.
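
A minimal sketch of evaluating this function in Python is given below. The disc-shaped robot and obstacles, the Gaussian distribution of the obstacle centres, the Monte-Carlo collision-probability estimate, and the omission of the workspace-boundary term $\beta_0$ are simplifications assumed here for illustration; they are not the construction of the cited paper.

```python
import numpy as np

def collision_probability(x, obs_mean, obs_cov, robot_radius, obs_radius, n=5000, seed=0):
    """Monte-Carlo estimate of p_i(x): probability that obstacle i, whose centre is
    Gaussian-distributed, overlaps a disc-shaped robot located at x (illustrative model)."""
    rng = np.random.default_rng(seed)
    centres = rng.multivariate_normal(obs_mean, obs_cov, n)
    return np.mean(np.linalg.norm(centres - x, axis=1) < robot_radius + obs_radius)

def pnf(x, x_d, obstacles, delta=0.05, K=4.0, robot_radius=0.2):
    """Probabilistic navigation function phi(x) = gamma_d / (gamma_d^K + beta)^(1/K).
    obstacles: list of (mean, cov, radius) tuples.
    The workspace-boundary term beta_0 is omitted in this sketch."""
    x, x_d = np.asarray(x, float), np.asarray(x_d, float)
    gamma_d = np.sum((x - x_d) ** 2)                    # squared distance to the target
    beta = 1.0
    for mean, cov, radius in obstacles:
        p_i = collision_probability(x, mean, cov, robot_radius, radius)
        beta *= (delta - p_i)                           # beta_i = Delta - p_i(x)
    return gamma_d / (gamma_d ** K + beta) ** (1.0 / K)

obstacles = [(np.array([2.0, 2.0]), 0.05 * np.eye(2), 0.5)]
print(pnf([0.0, 0.0], [4.0, 4.0], obstacles))
```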

Navigation Function in Optimal Control

While for certain applications it suffices to have a feasible navigation function, in many cases it is desirable to have a navigation function that is optimal with respect to a given cost functional $J$. Formalized as an optimal control problem, we can write

minimize $\quad J(x_{1:T}, u_{1:T}) = \int_T L(x_t, u_t, t)\, dt$
subject to $\quad \dot{x}_t = f(x_t, u_t)$

where $x$ is the state, $u$ is the control to apply, $L$ is the cost incurred at state $x$ when applying control $u$, and $f$ models the transition dynamics of the system.

Applying Bellman's principle of optimality, the optimal cost-to-go function is defined as

$\phi(x_t) = \min_{u_t \in U(x_t)} \big\{ L(x_t, u_t) + \phi(f(x_t, u_t)) \big\}$
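
On a discrete state space this fixed-point equation can be solved by value iteration. The following sketch assumes a 4-connected grid world with unit step cost $L = 1$ and a single goal cell; these choices are illustrative. The resulting table phi acts as a navigation function: it is zero at the goal, infinite on obstacle or unreachable cells, and decreases along optimal paths.

```python
import numpy as np

# Grid world: each free cell is a state x, controls u are 4-connected moves,
# L(x, u) = 1 per step, and obstacle cells are never entered.
FREE, OBST = 0, 1
grid = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 1, 0, 0]])
goal = (3, 3)
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]               # the control set U(x)

phi = np.full(grid.shape, np.inf)                        # cost-to-go, initialised to infinity
phi[goal] = 0.0                                          # axiom 1: phi = 0 on the goal

# Bellman backup phi(x) = min_u { L(x, u) + phi(f(x, u)) }, swept until a fixed point
for _ in range(grid.size):
    for r in range(grid.shape[0]):
        for c in range(grid.shape[1]):
            if grid[r, c] == OBST or (r, c) == goal:
                continue
            best = np.inf
            for dr, dc in moves:
                nr, nc = r + dr, c + dc
                if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1] and grid[nr, nc] == FREE:
                    best = min(best, 1.0 + phi[nr, nc])  # L = 1 per move
            phi[r, c] = best

print(phi)   # unreachable cells would keep phi = inf (axiom 2); elsewhere phi decreases toward the goal
```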

Together with the axioms defined above, we can define the optimal navigation function as

  1. $\phi(x) = 0 \quad \forall x \in X_g$
  2. $\phi(x) = \infty$ if and only if no point in $X_g$ is reachable from $x$.
  3. For every reachable state $x \in X \setminus X_g$, the local operator produces a state $x'$ for which $\phi(x') < \phi(x)$.
  4. $\phi(x_t) = \min_{u_t \in U(x_t)} \big\{ L(x_t, u_t) + \phi(f(x_t, u_t)) \big\}$

Although a navigation function is an example of reactive control, it can also be utilized for optimal control problems, which include planning capabilities.
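
Continuing the grid sketch above (this snippet assumes the variables grid, phi, moves and FREE from the previous block), the reactive part is simply the local operator that greedily steps to the neighbouring state with the smallest cost-to-go:

```python
def greedy_step(state, phi, grid, moves):
    """Local operator: return the neighbouring state with the smallest cost-to-go.
    Following it realises axiom 3 and reaches the goal without replanning."""
    r, c = state
    candidates = []
    for dr, dc in moves:
        nr, nc = r + dr, c + dc
        if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1] and grid[nr, nc] == FREE:
            candidates.append(((nr, nc), phi[nr, nc]))
    return min(candidates, key=lambda t: t[1])[0]

# Follow the navigation function from a start cell to the goal
state = (0, 0)
path = [state]
while phi[state] > 0:
    state = greedy_step(state, phi, grid, moves)
    path.append(state)
print(path)   # a shortest obstacle-free path to the goal cell
```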

Stochastic Navigation Function

If we assume that the transition dynamics of the system or the cost function is subject to noise, we obtain a stochastic optimal control problem with cost $J(x_t, u_t)$ and dynamics $f$. In the field of reinforcement learning, the cost is replaced by a reward function $R(x_t, u_t)$ and the dynamics by transition probabilities $P(x_{t+1} \mid x_t, u_t)$.
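
A minimal sketch of the stochastic case, assuming a small finite Markov decision process with known transition probabilities P and per-step costs L (the three-state example and its numbers are invented for illustration): the Bellman backup now minimises the expected cost-to-go.

```python
import numpy as np

def stochastic_backup(phi, P, L):
    """One Bellman backup for the stochastic case:
    phi(x) = min_u { L(x, u) + sum_x' P(x'|x, u) * phi(x') }.
    P has shape (n_states, n_actions, n_states); L has shape (n_states, n_actions)."""
    q = L + np.einsum('xuy,y->xu', P, phi)   # expected cost-to-go for each (x, u)
    return q.min(axis=1)

# Tiny 3-state example: state 2 is the goal (absorbing, zero cost);
# each action succeeds with probability 0.8 and otherwise the robot stays put or slips back.
n_states, n_actions = 3, 2
P = np.zeros((n_states, n_actions, n_states))
L = np.ones((n_states, n_actions))
P[0, 0] = [0.2, 0.8, 0.0]; P[0, 1] = [1.0, 0.0, 0.0]
P[1, 0] = [0.0, 0.2, 0.8]; P[1, 1] = [0.8, 0.2, 0.0]
P[2, :, 2] = 1.0; L[2, :] = 0.0                        # goal is absorbing and free

phi = np.zeros(n_states)
for _ in range(100):                                   # iterate to a fixed point
    phi = stochastic_backup(phi, P, L)
print(phi)                                             # expected cost-to-go under noisy dynamics
```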

References

  1. LaValle, Steven M. Planning Algorithms, Chapter 8.
  2. Hacohen, Shlomi; Shoval, Shraga; Shvalb, Nir (2019). "Probability Navigation Function for Stochastic Static Environments". International Journal of Control, Automation and Systems. 17 (8): 2097–2113. doi:10.1007/s12555-018-0563-2. S2CID 164509949.
  3. Andrey V. Savkin; Alexey S. Matveev; Michael Hoy (25 September 2015). Safe Robot Navigation Among Moving and Steady Obstacles. Elsevier Science. pp. 47–. ISBN 978-0-12-803757-7.
External links

  • NFsim: MATLAB Toolbox for motion planning using Navigation Functions.