How does a shift reduce parser know what rule to apply?


When writing a shift-reduce parser, how does the parser efficiently figure out which rule to apply? For example, say I have the following rules:

S -> S + S
S -> id

How would the parser quickly determine the rule to apply in the following parse stacks?

$ id        # id -> S
$ S         # shift
$ S +       # shift
$ S + id    # id -> S
$ S + S     # S + S -> S
$ S

All the examples I've seen just pull the correct rule out of nowhere, but what is the code behind choosing a rule? Pseudocode would be appreciated.

I've taken the examples from here, but pretty much every shift-reduce parsing article I find online just magically knows which rule to use and doesn't show how it's chosen.

1 Answer

Answered by rici (accepted):

The rule number is in the parsing table. In other words, it was precomputed when the parsing table was created.
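For the grammar in the question, the precomputed tables might look like the following hand-built sketch in Python. Note that this grammar is ambiguous, so a real generator would report a shift/reduce conflict on +; the sketch resolves it in favour of reduction (i.e. left associativity), which is an assumption made purely for illustration:

```python
# Hand-built LR tables for the grammar in the question:
#   rule 1: S -> S + S
#   rule 2: S -> id
# State numbers are arbitrary labels; a generator would compute them.

# PRODS[rule] = (left-hand side, length of right-hand side)
PRODS = {1: ('S', 3), 2: ('S', 1)}

# ACTION[(state, lookahead)] -> ('shift', next_state) | ('reduce', rule) | ('accept',)
ACTION = {
    (0, 'id'): ('shift', 2),
    (1, '+'):  ('shift', 3),
    (1, '$'):  ('accept',),
    (2, '+'):  ('reduce', 2),   # S -> id
    (2, '$'):  ('reduce', 2),
    (3, 'id'): ('shift', 2),
    (4, '+'):  ('reduce', 1),   # shift/reduce conflict here, resolved as reduce (left-assoc)
    (4, '$'):  ('reduce', 1),   # S -> S + S
}

# GOTO[(state, non-terminal)] -> next state; taken after a reduction
GOTO = {
    (0, 'S'): 1,
    (3, 'S'): 4,
}
```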

An LR state is a set of LR items, where each item is a production and an index into the production, usually written with a •. When you take a transition from one state to the next one, you move the • one symbol to the right in all the qualifying items. For a shift action, an item qualifies if the symbol following the • is the token being shifted, and for a goto action, which happens at the end of a reduction, an item qualifies if the symbol following the • is the non-terminal which was just reduced.
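As a sketch of that transition, assuming a hypothetical representation where an item is a pair (production index, dot position), and where production 0 is the usual augmented start rule S' -> S:

```python
# RHS[p] is the list of symbols on the right-hand side of production p.
# Production 0 is the augmented start rule S' -> S.
RHS = {0: ['S'], 1: ['S', '+', 'S'], 2: ['id']}

def advance(items, symbol):
    """Kernel of the successor state: move the dot over `symbol`
    in every item whose next expected symbol is `symbol`."""
    return {(p, dot + 1)
            for (p, dot) in items
            if dot < len(RHS[p]) and RHS[p][dot] == symbol}
```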

Normally not all the items in a state qualify, unless there is just one item in the state. But it can happen that two or more items qualify; that's an indication that the grammar probably isn't LL. Anyway, it doesn't matter: the parser generator takes all the qualifying items and uses them to create a new state (or to look up an already-constructed state). Newly constructed states are completed by "ε-closure", which is a fancy way of saying that you add an item, with the • at the beginning, for every production of each non-terminal which follows the • in the new state. (Recursively, which is why it's called a closure.)
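A minimal sketch of that ε-closure, reusing the item representation and RHS table above; PRODS_FOR is a hypothetical helper mapping each non-terminal to its production indices:

```python
# Hypothetical helper: productions indexed by their left-hand side.
PRODS_FOR = {'S': [1, 2]}   # S -> S + S  and  S -> id

def closure(items):
    """Add (q, 0) for every production q of each non-terminal that
    appears just after the dot, repeating until nothing new is added."""
    result = set(items)
    work = list(items)
    while work:
        p, dot = work.pop()
        if dot < len(RHS[p]) and RHS[p][dot] in PRODS_FOR:
            for q in PRODS_FOR[RHS[p][dot]]:
                if (q, 0) not in result:
                    result.add((q, 0))
                    work.append((q, 0))
    return result
```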

When the parser reaches a state where the • is at the end of an item, it can reduce that item, and the item's production is precisely the one which will be reduced. Reducing an item basically means backing up the parser until you reach the beginning of the item's production, which is what the parser stack is used for: each stack entry records a transition, so as you pop the stack you move backwards through the parse history. Once you reach the beginning of the item, you must be in a state which has a goto action on the production's non-terminal. That must be the case because an item with the • at the beginning was added during ε-closure, which only happens when some item(s) in the state have their • before that non-terminal. Then you take the goto action, which registers the fact that an instance of that non-terminal has just been recognised, and continue from there. So there's no magic.
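Putting that together gives the driver loop the question asks for: one table lookup per step, where a reduction pops one stack entry per right-hand-side symbol and then takes the goto. A sketch, assuming the ACTION, GOTO and PRODS tables from the earlier sketch:

```python
def parse(tokens):
    """Table-driven LR parse: the stack holds states, i.e. transitions taken."""
    stack = [0]                       # start state
    tokens = list(tokens) + ['$']     # end-of-input marker
    pos = 0
    while True:
        action = ACTION.get((stack[-1], tokens[pos]))
        if action is None:
            raise SyntaxError(f"unexpected {tokens[pos]!r}")
        if action[0] == 'shift':
            stack.append(action[1])   # push the successor state, consume the token
            pos += 1
        elif action[0] == 'reduce':
            rule = action[1]
            lhs, rhs_len = PRODS[rule]
            del stack[-rhs_len:]      # back up over the production's body
            stack.append(GOTO[(stack[-1], lhs)])  # goto on the reduced non-terminal
            print(f"reduce by rule {rule}")
        else:                         # ('accept',)
            return

parse(['id', '+', 'id'])  # prints "reduce by rule 2" twice, then "reduce by rule 1"
```

The trace this produces matches the one in the question: id is reduced to S, + and id are shifted, id is reduced to S, and finally S + S is reduced to S.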

Each reducible item has a lookahead set, which was also computed during table construction, consisting of the tokens which might validly come next. If the actual next token (the lookahead token) is in that set, the reduction is allowed to happen. If the lookahead token follows the • in some item in the current state, a shift action is allowed. If a state has both a possible reduction and a possible shift on the same token, the table has a parsing conflict and the grammar is not LR. The same is true if two different items in the state are both reducible on the same lookahead. For a grammar to be LR, every state can have at most one possible action for each lookahead token. (If a state has no possible action for the current lookahead, the parse fails and a syntax error is reported.)
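That "at most one action per lookahead" condition is exactly what the generator checks while it fills in the table. A sketch of the check, using the same hypothetical table layout as above:

```python
def add_action(action_table, state, lookahead, new_action):
    """Record an action, reporting a conflict if the cell is already
    occupied by a different action: the grammar is then not LR."""
    old = action_table.get((state, lookahead))
    if old is not None and old != new_action:
        kinds = {old[0], new_action[0]}
        kind = 'shift/reduce' if 'shift' in kinds else 'reduce/reduce'
        raise ValueError(f"{kind} conflict in state {state} on {lookahead!r}")
    action_table[(state, lookahead)] = new_action
```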

In my opinion, you can't really learn this algorithm just by reading about it, although I've tried to write it out. To see how it works, you need to construct (or borrow) a parsing table and play parser, armed with a whiteboard or a big pad of paper to keep track of the parsing stack. If you can find (or build) a parsing table where the items have not been deleted, you might find it easier to follow, although it takes up a lot more space. (G2G, like many "tutorials", deleted the items, possibly making it look like magic. But there are other resources, such as the infamous Dragon Book.)

The parser itself doesn't need to look at the items; all the relevant information has been summarised in the parsing table, which I suppose is why sites like G2G don't show them. And they do create a lot of clutter. Bison can produce Graphviz source for an image of the parsing automaton (its --graph option); you need to supply the --report=all command-line option as well if you want to see the ε-closure in each state.