The following is simply a conveniently formatted list of learning goals. Some of them may be hard to assess through a quiz or exam, so students should prioritize their studying based on the assessable learning goals listed below.

Assessable Learning Goals covered for Quiz 2

Be able to write a non-trivial grammar with recursive rules.

Determine what terminals are in the first set of a sequence of grammar symbols.

Determine what terminals are in the follow set of a grammar symbol.

Determine whether a "symbol derives to λ".

What are terminals, non-terminals, λ, and $ in a CFG?

What is a parse tree, and what characteristics does it have?

Be able to eliminate left recursion from a grammar in order to make it LL(1).
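
As a concrete illustration (my own sketch, not taken from the course materials), the snippet below applies the standard rewrite A -> A a | b  =>  A -> b A', A' -> a A' | λ to a grammar stored as a Python dict of productions; the example grammar E -> E + T | T is assumed for illustration only.

    LAMBDA = ()   # the empty production body stands in for lambda

    def eliminate_immediate_left_recursion(grammar, nt):
        """Rewrite  A -> A a1 | ... | b1 | ...   as
                    A -> b1 A' | ...   and   A' -> a1 A' | ... | lambda."""
        recursive = [p[1:] for p in grammar[nt] if p and p[0] == nt]
        other     = [p     for p in grammar[nt] if not p or p[0] != nt]
        if not recursive:
            return dict(grammar)                  # nothing to rewrite
        new_nt = nt + "'"
        result = dict(grammar)
        result[nt]     = [tuple(p) + (new_nt,) for p in other]
        result[new_nt] = [tuple(p) + (new_nt,) for p in recursive] + [LAMBDA]
        return result

    # E -> E + T | T   becomes   E -> T E',   E' -> + T E' | lambda
    print(eliminate_immediate_left_recursion({"E": [("E", "+", "T"), ("T",)]}, "E"))
    # {'E': [('T', "E'")], "E'": [('+', 'T', "E'"), ()]}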

Be able to factor out common prefixes in order to make an LL(1) grammar.
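
The factoring step itself is mechanical; here is a small sketch (the dangling-if production pair is my own assumed example): productions that share a prefix are replaced by one production ending in a fresh nonterminal whose bodies are the differing tails.

    def left_factor(prods, new_nt):
        """A -> x b1 | x b2 | ...   becomes   A -> x A'   and   A' -> b1 | b2 | ...
        (an empty tail plays the role of lambda).  prods is a list of symbol tuples."""
        prefix = []
        for column in zip(*prods):                # walk the productions in lockstep
            if all(sym == column[0] for sym in column):
                prefix.append(column[0])
            else:
                break
        if not prefix:
            return None                           # no common prefix to factor out
        a_prods       = [tuple(prefix) + (new_nt,)]
        a_prime_prods = [p[len(prefix):] for p in prods]
        return a_prods, a_prime_prods

    # S -> if E then S | if E then S else S   (classic common-prefix pair)
    print(left_factor([("if", "E", "then", "S"),
                       ("if", "E", "then", "S", "else", "S")], "S'"))
    # ([('if', 'E', 'then', 'S', "S'")], [(), ('else', 'S')])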

Given a grammar and its LL(1) parsing table, show how a parse tree is created from a token stream.
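
The loop below is a minimal sketch of that table-driven parse for an assumed toy grammar and hand-written table (E -> T E', E' -> + T E' | λ, T -> id); it only recognizes the input, but attaching a child node for each pushed symbol whenever a nonterminal is expanded is how the parse tree gets built.

    # Hypothetical LL(1) table; () is the lambda production.
    TABLE = {
        ("E",  "id"): ("T", "E'"),
        ("E'", "+"):  ("+", "T", "E'"),
        ("E'", "$"):  (),
        ("T",  "id"): ("id",),
    }
    NONTERMINALS = {"E", "E'", "T"}

    def ll1_parse(tokens, start="E"):
        tokens = list(tokens) + ["$"]
        stack = ["$", start]                      # start symbol on top
        i = 0
        while stack:
            top = stack.pop()
            if top in NONTERMINALS:
                body = TABLE.get((top, tokens[i]))
                if body is None:
                    raise SyntaxError(f"no table entry for ({top}, {tokens[i]})")
                stack.extend(reversed(body))      # push RHS, leftmost symbol on top
            elif top == tokens[i]:
                i += 1                            # matched a terminal (or $)
            else:
                raise SyntaxError(f"expected {top}, saw {tokens[i]}")
        return True

    print(ll1_parse(["id", "+", "id"]))           # True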

How are top-down parsers that don't use LL tables constructed?
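
They are usually written as recursive-descent parsers: one function per nonterminal, with the lookahead token choosing which production to follow. A minimal sketch for an assumed toy grammar (E -> T { + T }, T -> id):

    class Parser:
        def __init__(self, tokens):
            self.tokens = list(tokens) + ["$"]
            self.i = 0

        def peek(self):
            return self.tokens[self.i]

        def eat(self, expected):
            if self.peek() != expected:
                raise SyntaxError(f"expected {expected}, saw {self.peek()}")
            self.i += 1

        def parse_E(self):                        # E -> T E',  E' -> + T E' | lambda
            self.parse_T()
            while self.peek() == "+":
                self.eat("+")
                self.parse_T()

        def parse_T(self):                        # T -> id
            self.eat("id")

        def parse(self):
            self.parse_E()
            self.eat("$")

    Parser(["id", "+", "id"]).parse()             # succeeds; bad input raises SyntaxError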

How do you build an LL(1) parsing table from a grammar that is LL(1)?

How is a predict set calculated from first sets, follow sets, and "derives to λ"?
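
The rule is Predict(A -> α) = First(α), plus Follow(A) when α can derive λ; an LL(1) table then places A -> α in cell [A, t] for every t in that predict set. Below is a sketch with the first/follow/nullable data filled in by hand for an assumed toy grammar (E -> T E', E' -> + T E' | λ, T -> id):

    FIRST    = {"E": {"id"}, "E'": {"+"}, "T": {"id"}, "+": {"+"}, "id": {"id"}}
    NULLABLE = {"E'": True}                       # only E' derives to lambda
    FOLLOW   = {"E": {"$"}, "E'": {"$"}, "T": {"+", "$"}}

    def first_of_sequence(alpha):
        out = set()
        for sym in alpha:
            out |= FIRST[sym]
            if not NULLABLE.get(sym, False):
                return out, False                 # sequence cannot derive lambda
        return out, True                          # every symbol was nullable

    def predict(lhs, alpha):
        first, derives_lambda = first_of_sequence(alpha)
        return first | (FOLLOW[lhs] if derives_lambda else set())

    # Build the LL(1) table: A -> alpha goes in cell (A, t) for each t in Predict(A -> alpha).
    table = {}
    for lhs, alpha in [("E", ("T", "E'")), ("E'", ("+", "T", "E'")), ("E'", ()), ("T", ("id",))]:
        for t in predict(lhs, alpha):
            table[(lhs, t)] = alpha
    print(table)
    # {('E', 'id'): ('T', "E'"), ("E'", '+'): ('+', 'T', "E'"), ("E'", '$'): (), ('T', 'id'): ('id',)}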

What do the two Ls and the k stand for in "LL(k)" parsing?

What does a production rule written with left recursion look like?

What is a predict set?

How are Item Follow Sets formed?

If an LR(k) parsing table can be formed from a grammar G that generates or defines language L, what can be concluded about the ambiguity of L?

In terms of deciding which production rule to use, what is the fundamental advantage of LR parsers over LL parsers?

Know what an Item Set and its Progress Marker represent.

Understand the shift-reduce parsing algorithm (given an LR parsing table or a simple grammar).
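
A sketch of that loop, driven by a hand-written SLR(1) table for the assumed grammar E -> E + id | id (the state numbering is mine): shift pushes the next state, reduce pops one state per right-hand-side symbol and follows the goto entry for the rule's left-hand side.

    ACTION = {
        (0, "id"): ("shift", 2),
        (1, "+"):  ("shift", 3),
        (1, "$"):  ("accept", None),
        (2, "+"):  ("reduce", ("E", 1)),          # E -> id       (pop 1 state)
        (2, "$"):  ("reduce", ("E", 1)),
        (3, "id"): ("shift", 4),
        (4, "+"):  ("reduce", ("E", 3)),          # E -> E + id   (pop 3 states)
        (4, "$"):  ("reduce", ("E", 3)),
    }
    GOTO = {(0, "E"): 1}

    def shift_reduce_parse(tokens):
        tokens = list(tokens) + ["$"]
        states = [0]                              # state stack
        i = 0
        while True:
            action = ACTION.get((states[-1], tokens[i]))
            if action is None:
                raise SyntaxError(f"no action in state {states[-1]} on {tokens[i]}")
            kind, arg = action
            if kind == "shift":
                states.append(arg)
                i += 1
            elif kind == "reduce":
                lhs, rhs_len = arg
                del states[-rhs_len:]             # pop one state per RHS symbol
                states.append(GOTO[(states[-1], lhs)])
            else:                                 # accept
                return True

    print(shift_reduce_parse(["id", "+", "id"])) # True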

What do the L, R, and k stand for in LR(k) parsing?

What is a reduce-reduce conflict in an LR table construction algorithm?

What is a shift-reduce conflict in LR table construction?

What type of derivation do "bottom-up" parsers use (generate)?

Be able to calculate Closure and Goto for LR Item Sets.
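
A minimal sketch of the two operations over LR(0) items for an assumed toy grammar; an item (A, rhs, dot) records a production together with its progress marker, and dot is the index of the marker within rhs.

    GRAMMAR = {
        "S": [("E",)],
        "E": [("E", "+", "id"), ("id",)],
    }
    NONTERMINALS = set(GRAMMAR)

    def closure(items):
        items = set(items)
        changed = True
        while changed:
            changed = False
            for (lhs, rhs, dot) in list(items):
                # If the marker sits before a nonterminal, add that nonterminal's fresh items.
                if dot < len(rhs) and rhs[dot] in NONTERMINALS:
                    for body in GRAMMAR[rhs[dot]]:
                        item = (rhs[dot], body, 0)
                        if item not in items:
                            items.add(item)
                            changed = True
        return frozenset(items)

    def goto(items, symbol):
        # Advance the marker over `symbol`, then close the resulting kernel.
        moved = {(lhs, rhs, dot + 1)
                 for (lhs, rhs, dot) in items
                 if dot < len(rhs) and rhs[dot] == symbol}
        return closure(moved)

    I0 = closure({("S", ("E",), 0)})
    print(sorted(I0))
    print(sorted(goto(I0, "E")))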

What important property do LR parsers have with respect to the DBL language?

How are Follow Sets and the grammar's Item Set graph (CFSM) used to construct SLR parsing tables?

What property of its predict sets makes a grammar not LL(1)?

How are left recursive rules refactored so that a grammar might be processed with an LL(1) parser?

How are grammars with common prefixes that create intersecting predict sets refactored to become LL(1)?

Be able to write a non-trivial LL(1) grammar with recursive rules and operators (precedence and associativity).
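
One standard shape for such a grammar (my own example, not necessarily the one expected on the quiz) uses one nonterminal per precedence level and λ-tailed helper rules instead of left recursion; the recursive-descent evaluator below follows that grammar one function per nonterminal, and the sample results show - associating to the left and * binding tighter than +.

    # Assumed LL(1) expression grammar (one layer per precedence level):
    #   E  -> T E'      E' -> + T E' | - T E' | lambda     (lowest precedence)
    #   T  -> F T'      T' -> * F T' | / F T' | lambda
    #   F  -> num | ( E )                                  (highest precedence)
    import re

    def tokenize(src):
        return re.findall(r"\d+|[-+*/()]", src) + ["$"]

    class Evaluator:
        def __init__(self, src):
            self.toks, self.i = tokenize(src), 0

        def peek(self):
            return self.toks[self.i]

        def eat(self):
            tok = self.toks[self.i]
            self.i += 1
            return tok

        def E(self):                              # E -> T E'
            value = self.T()
            while self.peek() in ("+", "-"):      # E' -> + T E' | - T E' | lambda
                op, rhs = self.eat(), self.T()
                value = value + rhs if op == "+" else value - rhs
            return value

        def T(self):                              # T -> F T'
            value = self.F()
            while self.peek() in ("*", "/"):      # T' -> * F T' | / F T' | lambda
                op, rhs = self.eat(), self.F()
                value = value * rhs if op == "*" else value / rhs
            return value

        def F(self):                              # F -> num | ( E )
            if self.peek() == "(":
                self.eat()
                value = self.E()
                self.eat()                        # the closing ')'
                return value
            return int(self.eat())

    print(Evaluator("2-3-4").E())                 # -5: '-' is left-associative
    print(Evaluator("2+3*4").E())                 # 14: '*' binds tighter than '+'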