Left recursion

Article snapshot taken from Wikipedia with Creative Commons Attribution-ShareAlike license.

In the formal language theory of computer science, left recursion is a special case of recursion where a string is recognized as part of a language by the fact that it decomposes into a string from that same language (on the left) and a suffix (on the right). For instance, 1 + 2 + 3 can be recognized as a sum because it can be broken into 1 + 2, also a sum, and + 3, a suitable suffix.

In terms of context-free grammar, a nonterminal is left-recursive if the leftmost symbol in one of its productions is itself (in the case of direct left recursion) or can be made itself by some sequence of substitutions (in the case of indirect left recursion).

Definition

A grammar is left-recursive if and only if there exists a nonterminal symbol A that can derive a sentential form with itself as the leftmost symbol. Symbolically,

A ⇒⁺ Aα,

where ⇒⁺ indicates the operation of making one or more substitutions, and α is any sequence of terminal and nonterminal symbols.

Direct left recursion

Direct left recursion occurs when the definition can be satisfied with only one substitution. It requires a rule of the form

A → Aα

where α is a sequence of nonterminals and terminals. For example, the rule

Expression → Expression + Term

is directly left-recursive. A left-to-right recursive descent parser for this rule might look like

void Expression() {
  Expression();   // recursive call made before any input is consumed
  match('+');
  Term();
}

and such code would fall into infinite recursion when executed.

Indirect left recursion

Indirect left recursion occurs when the definition of left recursion is satisfied via several substitutions. It entails a set of rules following the pattern

A₀ → β₀ A₁ α₀
A₁ → β₁ A₂ α₁
⋯
Aₙ → βₙ A₀ αₙ

where β₀, β₁, …, βₙ are sequences that can each yield the empty string, while α₀, α₁, …, αₙ may be any sequences of terminal and nonterminal symbols (these too may be empty). The derivation

A₀ ⇒ β₀ A₁ α₀ ⇒⁺ A₁ α₀ ⇒ β₁ A₂ α₁ α₀ ⇒⁺ ⋯ ⇒⁺ A₀ αₙ ⋯ α₁ α₀

then gives A₀ as the leftmost symbol in its final sentential form.

Uses

Left recursion is commonly used as an idiom for making operations left-associative: so that an expression a+b-c-d+e is evaluated as (((a+b)-c)-d)+e. In this case, that evaluation order can be achieved as a matter of syntax via the three grammatical rules

Expression → Term
Expression → Expression + Term
Expression → Expression − Term

These only allow parsing the Expression a+b-c-d+e as consisting of the Expression a+b-c-d and the Term e, where a+b-c-d in turn consists of the Expression a+b-c and the Term d, while a+b-c consists of the Expression a+b and the Term c, and so on.
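The left-associative grouping that these rules enforce can be mimicked procedurally by folding over the operators from the left. A minimal Python sketch (the `evaluate` helper and its token representation are illustrative, not part of any standard parser API):

```python
from functools import reduce
import operator

ops = {"+": operator.add, "-": operator.sub}

def evaluate(first, rest):
    """Apply (operator, value) pairs to `first` strictly left to right,
    reproducing the grouping ((((a+b)-c)-d)+e of a left-recursive grammar."""
    return reduce(lambda acc, pair: ops[pair[0]](acc, pair[1]), rest, first)

# 1+2-3-4+5 evaluated as ((((1+2)-3)-4)+5
result = evaluate(1, [("+", 2), ("-", 3), ("-", 4), ("+", 5)])
print(result)  # 1
```

Folding from the right instead would produce the right-associative grouping 1+(2-(3-(4+5))), which differs for non-commutative operators such as subtraction.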

Removing left recursion

Left recursion often poses problems for parsers, either because it leads them into infinite recursion (as in the case of most top-down parsers) or because they expect rules in a normal form that forbids it (as in the case of many bottom-up parsers). Therefore, a grammar is often preprocessed to eliminate the left recursion.

Removing direct left recursion

The general algorithm to remove direct left recursion follows. Several improvements to this method have been made. For a left-recursive nonterminal A, discard any rules of the form A → A and consider those that remain:

A → Aα₁ | … | Aαₙ | β₁ | … | βₘ

where:

  • each α is a nonempty sequence of nonterminals and terminals, and
  • each β is a sequence of nonterminals and terminals that does not start with A.

Replace these with two sets of productions, one set for A:

A → β₁A′ | … | βₘA′

and another set for the fresh nonterminal A′ (often called the "tail" or the "rest"):

A′ → α₁A′ | … | αₙA′ | ε

Repeat this process until no direct left recursion remains.
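This transformation can be rendered directly in code. The following Python sketch is my own rendering (the function name and the list-of-symbols representation are assumptions, not from the source); it represents each production as a list of symbols and `[]` as the empty production ε:

```python
def remove_direct_left_recursion(nt, productions):
    """Split the productions of nonterminal `nt` into a non-left-recursive
    rule set for `nt` and a 'tail' rule set for a fresh nonterminal nt'.
    Rules of the form A -> A are discarded, as the algorithm requires."""
    tail = nt + "'"
    # alphas: what follows the leading A in each left-recursive rule
    alphas = [p[1:] for p in productions if p and p[0] == nt and len(p) > 1]
    # betas: rules that do not start with A
    betas = [p for p in productions if not p or p[0] != nt]
    # A  -> beta_1 A' | ... | beta_m A'
    new_nt = [b + [tail] for b in betas]
    # A' -> alpha_1 A' | ... | alpha_n A' | epsilon  ([] stands for epsilon)
    new_tail = [a + [tail] for a in alphas] + [[]]
    return {nt: new_nt, tail: new_tail}

rules = remove_direct_left_recursion(
    "Expr", [["Expr", "+", "Expr"], ["Integer"], ["String"]])
```

Running this on the example rule set below yields exactly the rewritten grammar shown there.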

As an example, consider the rule set

Expression → Expression + Expression | Integer | String

This could be rewritten to avoid left recursion as

Expression → Integer Expression′ | String Expression′
Expression′ → + Expression Expression′ | ε
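A recursive descent parser written against the rewritten grammar no longer calls itself before consuming input, so it terminates. A hedged Python sketch over a pre-tokenized list (the token-tuple format and function names are my own assumptions; error reporting is elided):

```python
def parse_expression(tokens, pos=0):
    """Expression -> Integer Expression' | String Expression'.
    Returns the position after the match, or None on failure."""
    if pos < len(tokens) and tokens[pos][0] in ("INT", "STR"):
        return parse_expression_tail(tokens, pos + 1)
    return None  # no alternative matched

def parse_expression_tail(tokens, pos):
    """Expression' -> '+' Expression Expression' | epsilon."""
    if pos < len(tokens) and tokens[pos] == ("OP", "+"):
        after = parse_expression(tokens, pos + 1)
        if after is None:
            return None
        return parse_expression_tail(tokens, after)
    return pos  # epsilon: consume nothing

tokens = [("INT", 1), ("OP", "+"), ("INT", 2), ("OP", "+"), ("STR", "x")]
assert parse_expression(tokens) == len(tokens)  # whole input recognised
```

Unlike the directly left-recursive `Expression()` shown earlier, every recursive call here is preceded by consuming at least one token, so the recursion depth is bounded by the input length.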

Removing all left recursion

The above process can be extended to eliminate all left recursion, by first converting indirect left recursion to direct left recursion on the highest numbered nonterminal in a cycle.

Inputs: A grammar: a set of nonterminals A₁, …, Aₙ and their productions
Output: A modified grammar generating the same language but without left recursion

  1. For each nonterminal Aᵢ:
    1. Repeat until an iteration leaves the grammar unchanged:
      1. For each rule Aᵢ → αᵢ, αᵢ being a sequence of terminals and nonterminals:
        1. If αᵢ begins with a nonterminal Aⱼ and j < i:
          1. Let βᵢ be αᵢ without its leading Aⱼ.
          2. Remove the rule Aᵢ → αᵢ.
          3. For each rule Aⱼ → αⱼ:
            1. Add the rule Aᵢ → αⱼβᵢ.
    2. Remove direct left recursion for Aᵢ as described above.

Step 1.1.1 amounts to expanding the initial nonterminal Aⱼ in the right-hand side of some rule Aᵢ → Aⱼβ, but only if j < i. If Aᵢ → Aⱼβ was one step in a cycle of productions giving rise to a left recursion, then this has shortened that cycle by one step, but often at the price of increasing the number of rules.

The algorithm may be viewed as establishing a topological ordering on nonterminals: afterwards there can only be a rule Aᵢ → Aⱼβ if j > i. Note that this algorithm is highly sensitive to the nonterminal ordering; optimizations often focus on choosing this ordering well.
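The steps above can be sketched in Python. This is a minimal rendering under simplifying assumptions (no ε-productions or cycles among the input nonterminals, duplicate productions not handled); the function name and data representation are my own:

```python
def remove_left_recursion(grammar, order):
    """Eliminate all left recursion. `grammar` maps each nonterminal to a
    list of productions (lists of symbols); `order` fixes the A_1..A_n
    numbering that the algorithm's j < i test relies on."""
    g = {nt: [list(p) for p in ps] for nt, ps in grammar.items()}
    for i, ai in enumerate(order):
        changed = True
        while changed:                       # step 1.1: repeat until stable
            changed = False
            for p in list(g[ai]):
                if p and p[0] in order[:i]:  # rule A_i -> A_j beta with j < i
                    aj, beta = p[0], p[1:]
                    g[ai].remove(p)                       # step 1.1.1.2
                    g[ai] += [q + beta for q in g[aj]]    # step 1.1.1.3
                    changed = True
        # step 1.2: remove direct left recursion for A_i, as in the
        # previous section ([] stands for the empty production epsilon)
        alphas = [p[1:] for p in g[ai] if p and p[0] == ai and len(p) > 1]
        betas = [p for p in g[ai] if not p or p[0] != ai]
        if alphas:
            tail = ai + "'"
            g[ai] = [b + [tail] for b in betas]
            g[tail] = [a + [tail] for a in alphas] + [[]]
    return g

# Indirect left recursion: A -> B a | c,  B -> A b
g = remove_left_recursion({"A": [["B", "a"], ["c"]], "B": [["A", "b"]]},
                          ["A", "B"])
```

On this example the cycle A → B → A is first shortened by substituting A's rules into B (giving B → B a b | c b), after which direct removal produces B → c b B′ and B′ → a b B′ | ε.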

Pitfalls

Although the above transformations preserve the language generated by a grammar, they may change the parse trees that witness strings' recognition. With suitable bookkeeping, tree rewriting can recover the originals, but if this step is omitted, the differences may change the semantics of a parse.

Associativity is particularly vulnerable; left-associative operators typically appear in right-associative-like arrangements under the new grammar. For example, starting with this grammar:

Expression → Expression − Term | Term
Term → Term * Factor | Factor
Factor → (Expression) | Integer

the standard transformations to remove left recursion yield the following:

Expression → Term Expression′
Expression′ → − Term Expression′ | ε
Term → Factor Term′
Term′ → * Factor Term′ | ε
Factor → (Expression) | Integer

Parsing the string "1 - 2 - 3" with the first grammar in an LALR parser (which can handle left-recursive grammars) would have resulted in the parse tree:

[Figure: Left-recursive parsing of a double subtraction]

This parse tree groups the terms on the left, giving the correct semantics (1 - 2) - 3.

Parsing with the second grammar gives

[Figure: Right-recursive parsing of a double subtraction]

which, properly interpreted, signifies 1 + (-2 + (-3)), also correct, but less faithful to the input and much harder to implement for some operators. Notice how terms to the right appear deeper in the tree, much as a right-recursive grammar would arrange them for 1 - (2 - 3).
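When the transformed grammar must nevertheless be parsed top-down, a common remedy for this pitfall is to thread an accumulator through the tail rule so that values are still combined left to right. A hedged Python sketch for the subtraction fragment of the grammar above (tokenisation is assumed done; Term is simplified to a bare integer for brevity):

```python
def parse_expression(tokens, pos=0):
    """Expression -> Term Expression'; evaluates while parsing."""
    value, pos = parse_term(tokens, pos)
    return parse_expression_tail(tokens, pos, value)

def parse_expression_tail(tokens, pos, acc):
    """Expression' -> '-' Term Expression' | epsilon.
    `acc` carries the value of everything parsed so far, so subtraction
    still groups to the left: ((1 - 2) - 3), not (1 - (2 - 3))."""
    if pos < len(tokens) and tokens[pos] == "-":
        rhs, pos = parse_term(tokens, pos + 1)
        return parse_expression_tail(tokens, pos, acc - rhs)
    return acc, pos

def parse_term(tokens, pos):
    """Term -> Integer (the Factor/parenthesis rules are omitted here)."""
    return int(tokens[pos]), pos + 1

value, _ = parse_expression(["1", "-", "2", "-", "3"])
print(value)  # -4, i.e. (1 - 2) - 3
```

The parse still follows the right-recursive grammar's shape, but the accumulator performs the bookkeeping that recovers the original left-associative semantics.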

Accommodating left recursion in top-down parsing

A formal grammar that contains left recursion cannot be parsed by an LL(k) parser or other naive recursive descent parser unless it is converted to a weakly equivalent right-recursive form. In contrast, left recursion is preferred for LALR parsers because it results in lower stack usage than right recursion. However, more sophisticated top-down parsers can implement general context-free grammars by use of curtailment. In 2006, Frost and Hafiz described an algorithm which accommodates ambiguous grammars with direct left-recursive production rules. That algorithm was extended into a complete parsing algorithm that accommodates indirect as well as direct left recursion in polynomial time, and generates compact polynomial-size representations of the potentially exponential number of parse trees for highly ambiguous grammars, by Frost, Hafiz and Callaghan in 2007. The authors then implemented the algorithm as a set of parser combinators written in the Haskell programming language.
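The curtailment idea can be sketched in miniature: bound how many times a nonterminal may be applied at the same input position by the length of the remaining input, so that left-recursive descent cannot loop forever. The following Python toy is my own rendering of that idea, not the authors' algorithm or code; it computes, for each symbol and position, the set of end positions the symbol can reach:

```python
def parses(grammar, start, tokens):
    """Return True if `start` derives `tokens`. `grammar` maps each
    nonterminal to a list of productions (lists of symbols); any symbol
    not in the grammar is a terminal. Left recursion is curtailed rather
    than eliminated from the grammar."""
    n = len(tokens)

    def apply(sym, i, counts):
        if sym not in grammar:                       # terminal symbol
            return {i + 1} if i < n and tokens[i] == sym else set()
        # Curtailment: applying a nonterminal more than (remaining
        # input + 1) times at one position cannot yield new results.
        key = (sym, i)
        if counts.get(key, 0) > n - i + 1:
            return set()
        counts = {**counts, key: counts.get(key, 0) + 1}
        results = set()
        for production in grammar[sym]:
            ends = {i}                               # positions reached so far
            for s in production:
                ends = {j for e in ends for j in apply(s, e, counts)}
            results |= ends
        return results

    return n in apply(start, 0, {})

# Directly left-recursive grammar: E -> E + T | T
grammar = {"E": [["E", "+", "T"], ["T"]], "T": [["1"], ["2"], ["3"]]}
print(parses(grammar, "E", list("1+2+3")))  # True
```

This toy only recognises strings; the published algorithm additionally memoises results and builds a compact shared representation of all parse trees, which is what makes it polynomial for highly ambiguous grammars.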

References

  1. James Power, Notes on Formal Language Theory and Parsing at the Wayback Machine (archived 2007-11-27). Department of Computer Science, National University of Ireland, Maynooth, Co. Kildare, Ireland.
  2. Moore, Robert C. (May 2000). "Removing Left Recursion from Context-Free Grammars" (PDF). 6th Applied Natural Language Processing Conference: 249–255.
  3. Frost, R.; R. Hafiz (2006). "A New Top-Down Parsing Algorithm to Accommodate Ambiguity and Left Recursion in Polynomial Time". ACM SIGPLAN Notices. 41 (5): 46–54. doi:10.1145/1149982.1149988. S2CID 8006549. Available from the author at http://hafiz.myweb.cs.uwindsor.ca/pub/p46-frost.pdf (archived 2015-01-08 at the Wayback Machine).
  4. Frost, R.; R. Hafiz; P. Callaghan (June 2007). "Modular and Efficient Top-Down Parsing for Ambiguous Left-Recursive Grammars" (PDF). 10th International Workshop on Parsing Technologies (IWPT), ACL-SIGPARSE: 109–120. Archived from the original (PDF) on 2011-05-27.
  5. Frost, R.; R. Hafiz; P. Callaghan (January 2008). "Parser Combinators for Ambiguous Left-Recursive Grammars". Practical Aspects of Declarative Languages (PDF). Lecture Notes in Computer Science. Vol. 4902. pp. 167–181. doi:10.1007/978-3-540-77442-6_12. ISBN 978-3-540-77441-9.
