Illustrate the left-most derivation on a token stream


I am trying to understand left-most derivation in the context of the LL parsing algorithm. This link explains it from the generative perspective, i.e. it shows how to follow a left-most derivation to generate a specific token sequence from a set of rules.

But I am thinking about the opposite direction. Given a token stream and a set of grammar rules, how do I find the proper sequence of rule applications that forms a left-most derivation?

Let's continue to use the following grammar from the aforementioned link:

[Grammar image from the link: N → N D | D, and D → a digit, e.g. D → 1, D → 2, D → 3.]

And the given token sequence is: 1 2 3

One way is this:

1 2 3
-> D D D 
-> N D D (rewrite the *left-most* D to N according to the rule N->D.)
-> N D (rewrite the *left-most* N D to N according to the rule N->N D.)
-> N  (same as above.)

But there are other ways to apply the grammar rules:

1 2 3 -> D D D -> N D D -> N N D -> N N N

OR

1 2 3 -> D D D -> N D D -> N N D -> N N

But only the first derivation ends with a single non-terminal.

As the token sequence length increases, there can be many more ways. I think that to infer the proper derivation steps, two prerequisites are needed:

  • a starting/root rule
  • the token sequence

Given these two, what is the algorithm to find the derivation steps? Do we have to make the final result a single non-terminal?


1 Answer

rici (accepted answer)

The general process of LL parsing consists of repeatedly:

  • Predict the production for the top grammar symbol on the stack, if that symbol is a non-terminal, and replace that symbol with the right-hand side of the production.

  • Match the top grammar symbol on the stack, if that symbol is a terminal, with the next input symbol, discarding both of them.

The match action is unproblematic but the prediction might require an oracle. However, for the purposes of this explanation, the mechanism by which the prediction is made is irrelevant, provided that it works. For example, it might be that for some small integer k, every possible sequence of k input symbols is only consistent with at most one possible production, in which case you could use a look-up table. In that case, we say that the grammar is LL(k). But you could use any mechanism, including magic. It is only necessary that the prediction always be accurate.
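
To make this concrete, here is a minimal sketch of that predict/match loop in Python. The names (ll_parse, predict) and the decision to hand the predictor the whole stack plus the remaining input are illustrative choices of mine, not part of the answer; the answer only requires that the prediction, however it is made, is always accurate.

def ll_parse(start_symbol, tokens, nonterminals, predict):
    # `predict(stack, unconsumed)` is the oracle: it must return the
    # right-hand side (a list of grammar symbols) of the production to
    # apply to the non-terminal on top of the stack.
    stack = [start_symbol]          # index 0 is the top of the stack
    unconsumed = list(tokens)
    while stack:
        top = stack[0]
        if top in nonterminals:
            # Predict: replace the non-terminal with the chosen right-hand side.
            stack = list(predict(stack, unconsumed)) + stack[1:]
        else:
            # Match: the terminal on top of the stack must equal the next token.
            if not unconsumed or unconsumed[0] != top:
                raise SyntaxError(f"expected {top!r}, got {unconsumed[:1]}")
            unconsumed.pop(0)
            stack = stack[1:]
    if unconsumed:
        raise SyntaxError(f"unconsumed input left over: {unconsumed}")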

At any step in this algorithm, the partially-derived string is the consumed input followed by the contents of the stack. Initially there is no consumed input and the stack consists solely of the start symbol, so the partially-derived string is just the start symbol (zero derivation steps have been applied). Since the consumed input consists solely of terminals and the algorithm only ever modifies the top (first) element of the stack, it is clear that the series of partially-derived strings constitutes a leftmost derivation.

If the parse is successful, the entire input will be consumed and the stack will be empty, so the parse results in a leftmost derivation of the input from the start symbol.
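
The same loop can be made to expose the leftmost derivation by recording the partially-derived string (consumed input followed by the stack) each time a production is applied. A hedged variation of the sketch above, with names that are again my own:

def ll_parse_with_derivation(start_symbol, tokens, nonterminals, predict):
    stack = [start_symbol]
    unconsumed = list(tokens)
    consumed = []
    # Step 0 of the derivation: nothing consumed, stack holds the start symbol.
    derivation = [consumed + stack]
    while stack:
        top = stack[0]
        if top in nonterminals:
            stack = list(predict(stack, unconsumed)) + stack[1:]
            derivation.append(consumed + stack)     # one derivation step
        else:
            if not unconsumed or unconsumed[0] != top:
                raise SyntaxError(f"expected {top!r}, got {unconsumed[:1]}")
            consumed.append(unconsumed.pop(0))
            stack = stack[1:]
            # A match moves a terminal from the stack to the consumed input,
            # so the partially-derived string does not change.
    if unconsumed:
        raise SyntaxError(f"unconsumed input left over: {unconsumed}")
    return derivation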

Here's the complete parse for your example:

Consumed           Unconsumed   Partial      Production
Input      Stack   input        derivation   or other action
--------   -----   ----------   ----------   --------------- 
           N       1 2 3        N            N → N D
           N D     1 2 3        N D          N → N D
           N D D   1 2 3        N D D        N → D
           D D D   1 2 3        D D D        D → 1
           1 D D   1 2 3        1 D D        -- match --
1          D D       2 3        1 D D        D → 2
1          2 D       2 3        1 2 D        -- match --
1 2        D           3        1 2 D        D → 3
1 2        3           3        1 2 3        -- match --
1 2 3      --         --        1 2 3        -- success --
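
For what it's worth, this trace can be reproduced with the ll_parse_with_derivation sketch above and a deliberately "magical" predictor of my own invention. It cheats by inspecting the whole configuration (it counts how many tokens the top N still has to cover), which is exactly the kind of information a fixed-lookahead LL(k) parser does not have:

def magic_predict(stack, unconsumed):
    top = stack[0]
    if top == "D":
        # Assume the next token is a digit and the grammar has D -> that digit.
        return [unconsumed[0]]
    # Every symbol below N on the stack will derive exactly one token,
    # so the top N must cover whatever is left over.
    tokens_for_n = len(unconsumed) - (len(stack) - 1)
    return ["N", "D"] if tokens_for_n > 1 else ["D"]

for step in ll_parse_with_derivation("N", ["1", "2", "3"], {"N", "D"}, magic_predict):
    print(" ".join(step))
# Prints: N / N D / N D D / D D D / 1 D D / 1 2 D / 1 2 3,
# i.e. the "Partial derivation" column with the repeated rows collapsed.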

If you read the last two columns, you can see the derivation process starting from N and ending with 1 2 3. In this example, the prediction can only be made using magic because the rule N → N D is not LL(k) for any k; using the right-recursive rule N → D N instead would allow an LL(2) decision procedure (for example, "use N → D N if there are at least two unconsumed input tokens; otherwise N → D").
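
With the right-recursive variant, that decision procedure is easy to write down. A small sketch, again reusing ll_parse_with_derivation and assuming the tokens are single digits:

def ll2_predict(stack, unconsumed):
    top = stack[0]
    if top == "D":
        return [unconsumed[0]]                     # LL(1) choice for D
    # "Use N -> D N if there are at least two unconsumed input tokens;
    #  otherwise N -> D."
    return ["D", "N"] if len(unconsumed) >= 2 else ["D"]

for step in ll_parse_with_derivation("N", ["1", "2", "3"], {"N", "D"}, ll2_predict):
    print(" ".join(step))
# Prints: N / D N / 1 N / 1 D N / 1 2 N / 1 2 D / 1 2 3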

The chart you are trying to produce, which starts with 1 2 3 and ends with N, is a bottom-up parse. Bottom-up parses using the LR algorithm correspond to rightmost derivations, but the derivation needs to be read backwards, since it ends with the start symbol.