Recursion

Recursion is a way of specifying a process by means of itself. More precisely (and to dispel the appearance of circularity in the definition), "complicated" instances of the process are defined in terms of "simpler" instances, and the "simplest" instances are given explicitly.

Examples of mathematical objects often defined recursively are functions and sets.

The canonical example of a recursively defined function is the following definition of the factorial function:

0! = 1
n! = n · (n-1)!   for any natural number n > 0

Given this definition, we work out 3! as follows:

3! = 3 · (3-1)!
   = 3 · 2!
   = 3 · 2 · (2-1)!
   = 3 · 2 · 1!
   = 3 · 2 · 1 · (1-1)!
   = 3 · 2 · 1 · 0!
   = 3 · 2 · 1 · 1
   = 6
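
Translated directly into code, this definition becomes a recursive function. Here is a minimal Python sketch (the name factorial is purely illustrative):

  def factorial(n):
      # Simplest instance, given explicitly.
      if n == 0:
          return 1
      # "Complicated" instance defined in terms of a simpler one.
      return n * factorial(n - 1)

  print(factorial(3))   # 6, matching the evaluation above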

Another example is the definition of Fibonacci numbers.
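
In code the Fibonacci numbers look much the same; the Python sketch below uses the common convention fib(0) = 0 and fib(1) = 1 (a choice of convention, not something fixed by the text):

  def fib(n):
      # Two simplest instances, given explicitly.
      if n == 0:
          return 0
      if n == 1:
          return 1
      # Each further value is defined by two simpler ones.
      return fib(n - 1) + fib(n - 2)

  print([fib(i) for i in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]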

In set theory there is a theorem guaranteeing that such functions exist.

The recursion theorem. Given a set X, an element a of X and a function f : X → X, there is a unique function F : N → X such that

F(0) = a, and
F(n+1) = f(F(n))   for any natural number n.
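
To see the theorem in action, pick (purely as an example; these choices are not in the text) X = N, a = 1 and f(x) = 2 · x; the unique F it promises is F(n) = 2^n. A small Python sketch of this construction:

  def make_F(a, f):
      # Builds the unique F with F(0) = a and F(n+1) = f(F(n)).
      def F(n):
          return a if n == 0 else f(F(n - 1))
      return F

  F = make_F(1, lambda x: 2 * x)
  print([F(n) for n in range(5)])   # [1, 2, 4, 8, 16], i.e. F(n) = 2**n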

The canonical example of a recursively defined set is the set N of natural numbers:

0 is in N
if n is in N, then n+1 is in N

The natural numbers can then be defined as the smallest set satisfying these two rules.
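
One way to make "smallest set satisfying these rules" concrete is to start from the explicitly given elements and apply the rules until nothing new appears. The following Python sketch (a bounded toy example, not part of the original definition) lists the members below a cutoff:

  def naturals_below(limit):
      # Start with the base case 0 and close under the rule n -> n + 1.
      ns = {0}
      while True:
          new = {n + 1 for n in ns if n + 1 < limit} - ns
          if not new:
              return sorted(ns)
          ns |= new

  print(naturals_below(5))   # [0, 1, 2, 3, 4]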

Another interesting example is the set of all true propositions in an axiomatic system:

if a proposition is an axiom, it is true.
if a proposition can be obtained from true propositions by means of inference rules, it is true.
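
The same closure idea applies here: start from the axioms and keep applying the inference rules until nothing new can be derived. A toy Python sketch, with made-up propositions and a single made-up rule (from "p" and "p->q", derive "q"):

  def derivable(axioms):
      # Close the set of axioms under a toy modus-ponens rule.
      facts = set(axioms)
      while True:
          new = {rule.split("->")[1]
                 for rule in facts
                 if "->" in rule and rule.split("->")[0] in facts} - facts
          if not new:
              return facts
          facts |= new

  print(sorted(derivable({"p", "p->q", "q->r"})))
  # ['p', 'p->q', 'q', 'q->r', 'r']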

Here is another, perhaps simpler way to understand recursive processes:

  1. Are we done yet? If so, return the results.
  2. If not, simplify the problem and go back to step 1.
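
That two-step description maps directly onto the shape of a typical recursive function. A Python sketch, using the sum of a list as a stand-in problem:

  def total(numbers):
      # 1. Are we done yet? The empty list is the simplest instance.
      if not numbers:
          return 0
      # 2. If not, simplify the problem (drop one element) and recurse.
      return numbers[0] + total(numbers[1:])

  print(total([3, 1, 4, 1, 5]))   # 14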

A common method of simplification is to divide the problem into subproblems of the same kind. Such a programming technique is called divide et impera or divide and conquer; when the subproblems overlap, the same idea, combined with storing intermediate results, is the basis of dynamic programming.
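
Merge sort is a standard illustration of this divide-and-conquer pattern (the Python sketch below is an example, not something taken from the article): split the list in half, sort each half recursively, then merge the sorted halves.

  def merge_sort(items):
      # Base case: zero or one element is already sorted.
      if len(items) <= 1:
          return items
      mid = len(items) // 2
      left = merge_sort(items[:mid])     # solve the subproblems recursively
      right = merge_sort(items[mid:])
      merged = []                        # combine the two sorted halves
      i = j = 0
      while i < len(left) and j < len(right):
          if left[i] <= right[j]:
              merged.append(left[i])
              i += 1
          else:
              merged.append(right[j])
              j += 1
      return merged + left[i:] + right[j:]

  print(merge_sort([5, 2, 4, 7, 1, 3]))   # [1, 2, 3, 4, 5, 7]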

Virtually all programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the computer keeps track of the various instances of the function by using a stack. Conversely, every recursive function can be transformed into an iterative function by using a stack.
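
For example, the recursive factorial above can be rewritten without recursion by managing a stack explicitly, mirroring what the computer's call stack does (an illustrative Python sketch):

  def factorial_iterative(n):
      # Push the pending multiplications, as recursive calls would.
      stack = []
      while n > 0:
          stack.append(n)
          n -= 1
      # Unwind the stack, as returning from those calls would.
      result = 1
      while stack:
          result *= stack.pop()
      return result

  print(factorial_iterative(3))   # 6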

Any function that can be evaluated by a computer can be expressed in terms of recursive functions, without the use of iteration. Indeed, some languages designed for logic programming and functional programming provide recursion as the only means of repetition directly available to the programmer. Such languages generally make tail recursion as efficient as iteration, letting programmers express other repetition structures (such as Scheme's map and for-each) in terms of recursion.
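
A tail-recursive version carries its partial result in an accumulator, so the recursive call is the last thing the function does and a language that optimizes tail calls (such as Scheme) can run it in constant stack space. Python, used here only for illustration, does not perform that optimization:

  def factorial_tail(n, acc=1):
      # The recursive call is in tail position: nothing remains to do after it.
      if n == 0:
          return acc
      return factorial_tail(n - 1, acc * n)

  print(factorial_tail(3))   # 6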

Recursion is deeply embedded in the theory of computation, with the theoretical equivalence of recursive functions and Turing machines at the foundation of ideas about the universality of the modern computer.

See also: