New Upstream Release - golang-github-pointlander-peg

Ready changes

Summary

Merged new upstream version: 1.0.1 (was: 1.0.0).

Resulting package

Built on 2022-03-14T03:01 (took 2m22s)

The resulting binary packages can be installed (if you have the apt repository enabled) by running one of:

apt install -t fresh-releases golang-github-pointlander-peg-dev
apt install -t fresh-releases peg-go

Lintian Result

Diff

diff --git a/.github/workflows/go.yml b/.github/workflows/go.yml
new file mode 100644
index 0000000..74067f9
--- /dev/null
+++ b/.github/workflows/go.yml
@@ -0,0 +1,26 @@
+name: Go
+
+on:
+  push:
+    branches: [ master ]
+  pull_request:
+    branches: [ master ]
+
+jobs:
+
+  build:
+    name: Build
+    runs-on: ubuntu-latest
+    steps:
+
+    - name: Set up Go 1.x
+      uses: actions/setup-go@v2
+      with:
+        go-version: ^1.13
+      id: go
+
+    - name: Check out code into the Go module directory
+      uses: actions/checkout@v2
+
+    - name: Build and Test
+      run: go run build.go test
diff --git a/.gitignore b/.gitignore
index 4d01630..4d4cda8 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,7 +1,9 @@
 *.peg.go
-!bootstrap.peg.go
+!peg.peg.go
 *.exe
 *.6
 peg
 calculator/calculator
 bootstrap/bootstrap
+cmd/peg-bootstrap/peg-bootstrap
+cmd/peg-bootstrap/peg*
diff --git a/LINKS.md b/LINKS.md
index 2f38c67..a336658 100644
--- a/LINKS.md
+++ b/LINKS.md
@@ -1 +1,19 @@
 https://medium.com/@octskyward/graal-truffle-134d8f28fb69#.jo3luf4dn
+http://nez-peg.github.io/
+https://en.wikipedia.org/wiki/DFA_minimization
+
+https://news.ycombinator.com/item?id=14589173
+http://jamey.thesharps.us/2017/06/search-based-compiler-code-generation.html
+
+https://news.ycombinator.com/item?id=15105119  
+https://en.wikipedia.org/wiki/Tree_transducer  
+
+# Type-Driven Program Synthesis
+https://news.ycombinator.com/item?id=18251145  
+https://www.youtube.com/watch?v=HnOix9TFy1A  
+http://comcom.csail.mit.edu/comcom/#welcome  
+https://bitbucket.org/nadiapolikarpova/synquid  
+
+# Formality – An efficient programming language and proof assistant
+https://news.ycombinator.com/item?id=18230148  
+https://github.com/maiavictor/formality  
diff --git a/Makefile b/Makefile
deleted file mode 100644
index 48fe55c..0000000
--- a/Makefile
+++ /dev/null
@@ -1,13 +0,0 @@
-# Copyright 2010 The Go Authors. All rights reserved.
-# Use of this source code is governed by a BSD-style
-# license that can be found in the LICENSE file.
-
-peg: bootstrap.peg.go peg.go main.go
-	go build
-
-bootstrap.peg.go: bootstrap/main.go peg.go
-	cd bootstrap; go build
-	bootstrap/bootstrap
-
-clean:
-	rm -f bootstrap/bootstrap peg peg.peg.go
diff --git a/README.md b/README.md
index 7833104..ea2ce7e 100644
--- a/README.md
+++ b/README.md
@@ -1,148 +1,236 @@
-# About
+# PEG, an Implementation of a Packrat Parsing Expression Grammar in Go
 
-Peg, Parsing Expression Grammar, is an implementation of a Packrat parser
-generator. A Packrat parser is a descent recursive parser capable of
-backtracking. The generated parser searches for the correct parsing of the
-input.
+[![GoDoc](https://godoc.org/github.com/pointlander/peg?status.svg)](https://godoc.org/github.com/pointlander/peg)
+[![Go Report Card](https://goreportcard.com/badge/github.com/pointlander/peg)](https://goreportcard.com/report/github.com/pointlander/peg)
+[![Coverage](https://gocover.io/_badge/github.com/pointlander/peg)](https://gocover.io/github.com/pointlander/peg)
 
-For more information see:
-* http://en.wikipedia.org/wiki/Parsing_expression_grammar
-* http://pdos.csail.mit.edu/~baford/packrat/
+A [Parsing Expression Grammar](http://en.wikipedia.org/wiki/Parsing_expression_grammar) (hence `peg`) is a way to create grammars similar in principle to [regular expressions](https://en.wikipedia.org/wiki/Regular_expression), but with better code integration. Specifically, `peg` is an implementation of the [Packrat](https://en.wikipedia.org/wiki/Parsing_expression_grammar#Implementing_parsers_from_parsing_expression_grammars) parser generator originally implemented as [peg/leg](https://www.piumarta.com/software/peg/) by [Ian Piumarta](https://www.piumarta.com/cv/) in C. A Packrat parser is a recursive descent parser capable of backtracking and of negative look-ahead assertions, which are problematic for regular expression engines.
 
-This Go implementation is based on:
-* http://piumarta.com/software/peg/
+## See Also
 
+* <http://en.wikipedia.org/wiki/Parsing_expression_grammar>
+* <http://pdos.csail.mit.edu/~baford/packrat/>
+* <http://piumarta.com/software/peg/>
 
-# Usage
+## Installing
+
+`go get -u github.com/pointlander/peg`
+
+## Building
+
+### Using Pre-Generated Files
+
+`go install`
+
+### Generating Files Yourself
+You should only need to do this if you are contributing to the library, or if something gets messed up.
+
+`go run build.go` or `go generate`
+
+With tests:
+
+`go run build.go test`
+
+## Usage
+
+```
+peg [<option>]... <file>
+
+Usage of peg:
+  -inline
+      parse rule inlining
+  -noast
+      disable AST
+  -output string
+      specify name of output file
+  -print
+      directly dump the syntax tree
+  -strict
+      treat compiler warnings as errors
+  -switch
+      replace if-else if-else like blocks with switch blocks
+  -syntax
+      print out the syntax tree
+  -version
+      print the version and exit
+
+```
+
+
+## Sample Makefile
+
+This sample `Makefile` will convert any file ending with `.peg` into a `.go` file with the same name. Adjust as needed.
+
+```make
+.SUFFIXES: .peg .go
+
+.peg.go:
+	peg -noast -switch -inline -strict -output $@ $<
+
+all: grammar.go
+```
+
+Use caution when picking your names to avoid overwriting existing `.go` files. Since only one PEG grammar is allowed per Go package (currently), using the name `grammar.peg` is suggested as a convention:
 
 ```
--inline
- Tells the parser generator to inline parser rules.
--switch
- Reduces the number of rules that have to be tried for some pegs.
- If statements are replaced with switch statements.
+grammar.peg
+grammar.go
 ```
 
+## PEG File Syntax
 
-# Syntax
+First declare the package name and any import(s) required:
 
-First declare the package name:
 ```
 package <package name>
+
+import <import name>
 ```
 
 Then declare the parser:
+
 ```
 type <parser name> Peg {
 	<parser state variables>
 }
 ```
 
-Next declare the rules. The first rule is the entry point into the parser:
+Next declare the rules. The main rules are described below; they are based on the [peg/leg rules](https://www.piumarta.com/software/peg/peg.1.html), whose documentation provides additional detail.
+
+The first rule is the entry point into the parser:
+
 ```
 <rule name> <- <rule body>
 ```
 
-The first rule should probably end with '!.' to indicate no more input follows:
+The first rule should probably end with `!.` to indicate no more input follows. 
+
 ```
 first <- . !.
 ```
 
-'.' means any character matches. For zero or more character matches use:
+The `!.` idiom is often given its own rule, `END`, to make PEG rules more readable:
+
+```
+END <- !.
+```
+
+`.` means any character matches. For zero or more character matches, use:
+
 ```
 repetition <- .*
 ```
 
-For one or more character matches use:
+For one or more character matches, use:
+
 ```
 oneOrMore <- .+
 ```
 
-For an optional character match use:
+For an optional character match, use:
+
 ```
 optional <- .?
 ```
 
-If specific characters are to be matched use single quotes:
+If specific characters are to be matched, use single quotes:
+
 ```
 specific <- 'a'* 'bc'+ 'de'?
 ```
-will match the string "aaabcbcde".
 
-For choosing between different inputs use alternates:
+This will match the string `"aaabcbcde"`.
+
+For choosing between different inputs, use alternates:
+
 ```
 prioritized <- 'a' 'a'* / 'bc'+ / 'de'?
 ```
-will match "aaaa" or "bcbc" or "de" or "". The matches are attempted in order.
 
-If the characters are case insensitive use double quotes:
+This will match `"aaaa"` or `"bcbc"` or `"de"` or `""`. The matches are attempted in order.
+
+If the characters are case insensitive, use double quotes:
+
 ```
 insensitive <- "abc"
 ```
-will match "abc" or "Abc" or "ABc" etc...
 
-For matching a set of characters use a character class:
+This will match `"abc"` or `"Abc"` or `"ABc"` and so on.
+
+For matching a set of characters, use a character class:
+
 ```
 class <- [a-z]
 ```
-will watch "a" or "b" or all the way to "z".
 
-For an inverse character class start with a tilde:
+This will match `"a"` or `"b"` or all the way to `"z"`.
+
+For an inverse character class, start with a caret:
+
 ```
-inverse <- [~a-z]
+inverse <- [^a-z]
 ```
-will match anything but "a" or "b" or all the way to "z"
 
-If the character class is case insensitive use double brackets:
+This will match anything but `"a"` or `"b"` or all the way to `"z"`.
+
+If the character class is case insensitive, use double brackets:
+
 ```
 insensitive <- [[A-Z]]
 ```
 
+(Note that this is not available in regular expression syntax.)
+
 Use parentheses for grouping:
+
 ```
 grouping <- (rule1 / rule2) rule3
 ```
 
-For looking ahead for a match (predicate) use:
+For looking ahead for a match (predicate), use:
+
 ```
 lookAhead <- &rule1 rule2
 ```
 
-For inverse look ahead use:
+For inverse look ahead, use:
+
 ```
 inverse <- !rule1 rule2
 ```
 
 Use curly braces for Go code:
+
 ```
 gocode <- { fmt.Println("hello world") }
 ```
 
-For string captures use less than greater than:
+For string captures, use less than and greater than:
+
 ```
-capture <- <'capture'> { fmt.Println(buffer[begin:end]) }
+capture <- <'capture'> { fmt.Println(text) }
 ```
-Will print out "capture". The captured string is stored in buffer[begin:end].
 
+This will print out `"capture"`. The captured string is available as `text` (a slice of `buffer[begin:end]`).
 
-# Files
+## Testing Complex Grammars
 
-* bootstrap/main.go: bootstrap syntax tree of peg
-* peg.go: syntax tree and code generator
-* main.go: bootstrap main
-* peg.peg: peg in its own language
+Testing a grammar usually requires more than average unit testing: many inputs must be checked against expected outputs, and a grammar is often shared across more than one language implementation. Consider maintaining the inputs and expected outputs in a structured file format such as JSON or YAML and parsing that file during tests, or use an existing Go option such as Rob Muhlestein's [`tinout`](https://github.com/robmuh/tinout) package.
 
+## Files
 
-# Testing
+* `bootstrap/main.go` - bootstrap syntax tree of peg
+* `tree/peg.go` - syntax tree and code generator
+* `peg.peg` - peg in its own language
 
-There should be no differences between the bootstrap and self compiled:
+## Author
 
-```
-./peg -inline -switch peg.peg
-diff bootstrap.peg.go peg.peg.go
-```
+Andrew Snodgrass
 
+## Projects That Use `peg`
 
-# Author
+Here are some projects that use `peg` to provide further examples of PEG grammars:
+
+* <https://github.com/tj/go-naturaldate> -  natural date/time parsing
+* <https://github.com/robmuh/dtime> - easy date/time formats with duration spans
 
-Andrew Snodgrass
diff --git a/bootstrap.peg.go b/bootstrap.peg.go
deleted file mode 100644
index cfc23a4..0000000
--- a/bootstrap.peg.go
+++ /dev/null
@@ -1,3040 +0,0 @@
-package main
-
-import (
-	"fmt"
-	"math"
-	"sort"
-	"strconv"
-)
-
-const endSymbol rune = 1114112
-
-/* The rule types inferred from the grammar are below. */
-type pegRule uint8
-
-const (
-	ruleUnknown pegRule = iota
-	ruleGrammar
-	ruleImport
-	ruleDefinition
-	ruleExpression
-	ruleSequence
-	rulePrefix
-	ruleSuffix
-	rulePrimary
-	ruleIdentifier
-	ruleIdentStart
-	ruleIdentCont
-	ruleLiteral
-	ruleClass
-	ruleRanges
-	ruleDoubleRanges
-	ruleRange
-	ruleDoubleRange
-	ruleChar
-	ruleDoubleChar
-	ruleEscape
-	ruleLeftArrow
-	ruleSlash
-	ruleAnd
-	ruleNot
-	ruleQuestion
-	ruleStar
-	rulePlus
-	ruleOpen
-	ruleClose
-	ruleDot
-	ruleSpaceComment
-	ruleSpacing
-	ruleMustSpacing
-	ruleComment
-	ruleSpace
-	ruleEndOfLine
-	ruleEndOfFile
-	ruleAction
-	ruleActionBody
-	ruleBegin
-	ruleEnd
-	ruleAction0
-	ruleAction1
-	ruleAction2
-	rulePegText
-	ruleAction3
-	ruleAction4
-	ruleAction5
-	ruleAction6
-	ruleAction7
-	ruleAction8
-	ruleAction9
-	ruleAction10
-	ruleAction11
-	ruleAction12
-	ruleAction13
-	ruleAction14
-	ruleAction15
-	ruleAction16
-	ruleAction17
-	ruleAction18
-	ruleAction19
-	ruleAction20
-	ruleAction21
-	ruleAction22
-	ruleAction23
-	ruleAction24
-	ruleAction25
-	ruleAction26
-	ruleAction27
-	ruleAction28
-	ruleAction29
-	ruleAction30
-	ruleAction31
-	ruleAction32
-	ruleAction33
-	ruleAction34
-	ruleAction35
-	ruleAction36
-	ruleAction37
-	ruleAction38
-	ruleAction39
-	ruleAction40
-	ruleAction41
-	ruleAction42
-	ruleAction43
-	ruleAction44
-	ruleAction45
-	ruleAction46
-	ruleAction47
-	ruleAction48
-
-	rulePre
-	ruleIn
-	ruleSuf
-)
-
-var rul3s = [...]string{
-	"Unknown",
-	"Grammar",
-	"Import",
-	"Definition",
-	"Expression",
-	"Sequence",
-	"Prefix",
-	"Suffix",
-	"Primary",
-	"Identifier",
-	"IdentStart",
-	"IdentCont",
-	"Literal",
-	"Class",
-	"Ranges",
-	"DoubleRanges",
-	"Range",
-	"DoubleRange",
-	"Char",
-	"DoubleChar",
-	"Escape",
-	"LeftArrow",
-	"Slash",
-	"And",
-	"Not",
-	"Question",
-	"Star",
-	"Plus",
-	"Open",
-	"Close",
-	"Dot",
-	"SpaceComment",
-	"Spacing",
-	"MustSpacing",
-	"Comment",
-	"Space",
-	"EndOfLine",
-	"EndOfFile",
-	"Action",
-	"ActionBody",
-	"Begin",
-	"End",
-	"Action0",
-	"Action1",
-	"Action2",
-	"PegText",
-	"Action3",
-	"Action4",
-	"Action5",
-	"Action6",
-	"Action7",
-	"Action8",
-	"Action9",
-	"Action10",
-	"Action11",
-	"Action12",
-	"Action13",
-	"Action14",
-	"Action15",
-	"Action16",
-	"Action17",
-	"Action18",
-	"Action19",
-	"Action20",
-	"Action21",
-	"Action22",
-	"Action23",
-	"Action24",
-	"Action25",
-	"Action26",
-	"Action27",
-	"Action28",
-	"Action29",
-	"Action30",
-	"Action31",
-	"Action32",
-	"Action33",
-	"Action34",
-	"Action35",
-	"Action36",
-	"Action37",
-	"Action38",
-	"Action39",
-	"Action40",
-	"Action41",
-	"Action42",
-	"Action43",
-	"Action44",
-	"Action45",
-	"Action46",
-	"Action47",
-	"Action48",
-
-	"Pre_",
-	"_In_",
-	"_Suf",
-}
-
-type node32 struct {
-	token32
-	up, next *node32
-}
-
-func (node *node32) print(depth int, buffer string) {
-	for node != nil {
-		for c := 0; c < depth; c++ {
-			fmt.Printf(" ")
-		}
-		fmt.Printf("\x1B[34m%v\x1B[m %v\n", rul3s[node.pegRule], strconv.Quote(string(([]rune(buffer)[node.begin:node.end]))))
-		if node.up != nil {
-			node.up.print(depth+1, buffer)
-		}
-		node = node.next
-	}
-}
-
-func (node *node32) Print(buffer string) {
-	node.print(0, buffer)
-}
-
-type element struct {
-	node *node32
-	down *element
-}
-
-/* ${@} bit structure for abstract syntax tree */
-type token32 struct {
-	pegRule
-	begin, end, next uint32
-}
-
-func (t *token32) isZero() bool {
-	return t.pegRule == ruleUnknown && t.begin == 0 && t.end == 0 && t.next == 0
-}
-
-func (t *token32) isParentOf(u token32) bool {
-	return t.begin <= u.begin && t.end >= u.end && t.next > u.next
-}
-
-func (t *token32) getToken32() token32 {
-	return token32{pegRule: t.pegRule, begin: uint32(t.begin), end: uint32(t.end), next: uint32(t.next)}
-}
-
-func (t *token32) String() string {
-	return fmt.Sprintf("\x1B[34m%v\x1B[m %v %v %v", rul3s[t.pegRule], t.begin, t.end, t.next)
-}
-
-type tokens32 struct {
-	tree    []token32
-	ordered [][]token32
-}
-
-func (t *tokens32) trim(length int) {
-	t.tree = t.tree[0:length]
-}
-
-func (t *tokens32) Print() {
-	for _, token := range t.tree {
-		fmt.Println(token.String())
-	}
-}
-
-func (t *tokens32) Order() [][]token32 {
-	if t.ordered != nil {
-		return t.ordered
-	}
-
-	depths := make([]int32, 1, math.MaxInt16)
-	for i, token := range t.tree {
-		if token.pegRule == ruleUnknown {
-			t.tree = t.tree[:i]
-			break
-		}
-		depth := int(token.next)
-		if length := len(depths); depth >= length {
-			depths = depths[:depth+1]
-		}
-		depths[depth]++
-	}
-	depths = append(depths, 0)
-
-	ordered, pool := make([][]token32, len(depths)), make([]token32, len(t.tree)+len(depths))
-	for i, depth := range depths {
-		depth++
-		ordered[i], pool, depths[i] = pool[:depth], pool[depth:], 0
-	}
-
-	for i, token := range t.tree {
-		depth := token.next
-		token.next = uint32(i)
-		ordered[depth][depths[depth]] = token
-		depths[depth]++
-	}
-	t.ordered = ordered
-	return ordered
-}
-
-type state32 struct {
-	token32
-	depths []int32
-	leaf   bool
-}
-
-func (t *tokens32) AST() *node32 {
-	tokens := t.Tokens()
-	stack := &element{node: &node32{token32: <-tokens}}
-	for token := range tokens {
-		if token.begin == token.end {
-			continue
-		}
-		node := &node32{token32: token}
-		for stack != nil && stack.node.begin >= token.begin && stack.node.end <= token.end {
-			stack.node.next = node.up
-			node.up = stack.node
-			stack = stack.down
-		}
-		stack = &element{node: node, down: stack}
-	}
-	return stack.node
-}
-
-func (t *tokens32) PreOrder() (<-chan state32, [][]token32) {
-	s, ordered := make(chan state32, 6), t.Order()
-	go func() {
-		var states [8]state32
-		for i := range states {
-			states[i].depths = make([]int32, len(ordered))
-		}
-		depths, state, depth := make([]int32, len(ordered)), 0, 1
-		write := func(t token32, leaf bool) {
-			S := states[state]
-			state, S.pegRule, S.begin, S.end, S.next, S.leaf = (state+1)%8, t.pegRule, t.begin, t.end, uint32(depth), leaf
-			copy(S.depths, depths)
-			s <- S
-		}
-
-		states[state].token32 = ordered[0][0]
-		depths[0]++
-		state++
-		a, b := ordered[depth-1][depths[depth-1]-1], ordered[depth][depths[depth]]
-	depthFirstSearch:
-		for {
-			for {
-				if i := depths[depth]; i > 0 {
-					if c, j := ordered[depth][i-1], depths[depth-1]; a.isParentOf(c) &&
-						(j < 2 || !ordered[depth-1][j-2].isParentOf(c)) {
-						if c.end != b.begin {
-							write(token32{pegRule: ruleIn, begin: c.end, end: b.begin}, true)
-						}
-						break
-					}
-				}
-
-				if a.begin < b.begin {
-					write(token32{pegRule: rulePre, begin: a.begin, end: b.begin}, true)
-				}
-				break
-			}
-
-			next := depth + 1
-			if c := ordered[next][depths[next]]; c.pegRule != ruleUnknown && b.isParentOf(c) {
-				write(b, false)
-				depths[depth]++
-				depth, a, b = next, b, c
-				continue
-			}
-
-			write(b, true)
-			depths[depth]++
-			c, parent := ordered[depth][depths[depth]], true
-			for {
-				if c.pegRule != ruleUnknown && a.isParentOf(c) {
-					b = c
-					continue depthFirstSearch
-				} else if parent && b.end != a.end {
-					write(token32{pegRule: ruleSuf, begin: b.end, end: a.end}, true)
-				}
-
-				depth--
-				if depth > 0 {
-					a, b, c = ordered[depth-1][depths[depth-1]-1], a, ordered[depth][depths[depth]]
-					parent = a.isParentOf(b)
-					continue
-				}
-
-				break depthFirstSearch
-			}
-		}
-
-		close(s)
-	}()
-	return s, ordered
-}
-
-func (t *tokens32) PrintSyntax() {
-	tokens, ordered := t.PreOrder()
-	max := -1
-	for token := range tokens {
-		if !token.leaf {
-			fmt.Printf("%v", token.begin)
-			for i, leaf, depths := 0, int(token.next), token.depths; i < leaf; i++ {
-				fmt.Printf(" \x1B[36m%v\x1B[m", rul3s[ordered[i][depths[i]-1].pegRule])
-			}
-			fmt.Printf(" \x1B[36m%v\x1B[m\n", rul3s[token.pegRule])
-		} else if token.begin == token.end {
-			fmt.Printf("%v", token.begin)
-			for i, leaf, depths := 0, int(token.next), token.depths; i < leaf; i++ {
-				fmt.Printf(" \x1B[31m%v\x1B[m", rul3s[ordered[i][depths[i]-1].pegRule])
-			}
-			fmt.Printf(" \x1B[31m%v\x1B[m\n", rul3s[token.pegRule])
-		} else {
-			for c, end := token.begin, token.end; c < end; c++ {
-				if i := int(c); max+1 < i {
-					for j := max; j < i; j++ {
-						fmt.Printf("skip %v %v\n", j, token.String())
-					}
-					max = i
-				} else if i := int(c); i <= max {
-					for j := i; j <= max; j++ {
-						fmt.Printf("dupe %v %v\n", j, token.String())
-					}
-				} else {
-					max = int(c)
-				}
-				fmt.Printf("%v", c)
-				for i, leaf, depths := 0, int(token.next), token.depths; i < leaf; i++ {
-					fmt.Printf(" \x1B[34m%v\x1B[m", rul3s[ordered[i][depths[i]-1].pegRule])
-				}
-				fmt.Printf(" \x1B[34m%v\x1B[m\n", rul3s[token.pegRule])
-			}
-			fmt.Printf("\n")
-		}
-	}
-}
-
-func (t *tokens32) PrintSyntaxTree(buffer string) {
-	tokens, _ := t.PreOrder()
-	for token := range tokens {
-		for c := 0; c < int(token.next); c++ {
-			fmt.Printf(" ")
-		}
-		fmt.Printf("\x1B[34m%v\x1B[m %v\n", rul3s[token.pegRule], strconv.Quote(string(([]rune(buffer)[token.begin:token.end]))))
-	}
-}
-
-func (t *tokens32) Add(rule pegRule, begin, end, depth uint32, index int) {
-	t.tree[index] = token32{pegRule: rule, begin: uint32(begin), end: uint32(end), next: uint32(depth)}
-}
-
-func (t *tokens32) Tokens() <-chan token32 {
-	s := make(chan token32, 16)
-	go func() {
-		for _, v := range t.tree {
-			s <- v.getToken32()
-		}
-		close(s)
-	}()
-	return s
-}
-
-func (t *tokens32) Error() []token32 {
-	ordered := t.Order()
-	length := len(ordered)
-	tokens, length := make([]token32, length), length-1
-	for i := range tokens {
-		o := ordered[length-i]
-		if len(o) > 1 {
-			tokens[i] = o[len(o)-2].getToken32()
-		}
-	}
-	return tokens
-}
-
-func (t *tokens32) Expand(index int) {
-	tree := t.tree
-	if index >= len(tree) {
-		expanded := make([]token32, 2*len(tree))
-		copy(expanded, tree)
-		t.tree = expanded
-	}
-}
-
-type Peg struct {
-	*Tree
-
-	Buffer string
-	buffer []rune
-	rules  [92]func() bool
-	Parse  func(rule ...int) error
-	Reset  func()
-	Pretty bool
-	tokens32
-}
-
-type textPosition struct {
-	line, symbol int
-}
-
-type textPositionMap map[int]textPosition
-
-func translatePositions(buffer []rune, positions []int) textPositionMap {
-	length, translations, j, line, symbol := len(positions), make(textPositionMap, len(positions)), 0, 1, 0
-	sort.Ints(positions)
-
-search:
-	for i, c := range buffer {
-		if c == '\n' {
-			line, symbol = line+1, 0
-		} else {
-			symbol++
-		}
-		if i == positions[j] {
-			translations[positions[j]] = textPosition{line, symbol}
-			for j++; j < length; j++ {
-				if i != positions[j] {
-					continue search
-				}
-			}
-			break search
-		}
-	}
-
-	return translations
-}
-
-type parseError struct {
-	p   *Peg
-	max token32
-}
-
-func (e *parseError) Error() string {
-	tokens, error := []token32{e.max}, "\n"
-	positions, p := make([]int, 2*len(tokens)), 0
-	for _, token := range tokens {
-		positions[p], p = int(token.begin), p+1
-		positions[p], p = int(token.end), p+1
-	}
-	translations := translatePositions(e.p.buffer, positions)
-	format := "parse error near %v (line %v symbol %v - line %v symbol %v):\n%v\n"
-	if e.p.Pretty {
-		format = "parse error near \x1B[34m%v\x1B[m (line %v symbol %v - line %v symbol %v):\n%v\n"
-	}
-	for _, token := range tokens {
-		begin, end := int(token.begin), int(token.end)
-		error += fmt.Sprintf(format,
-			rul3s[token.pegRule],
-			translations[begin].line, translations[begin].symbol,
-			translations[end].line, translations[end].symbol,
-			strconv.Quote(string(e.p.buffer[begin:end])))
-	}
-
-	return error
-}
-
-func (p *Peg) PrintSyntaxTree() {
-	p.tokens32.PrintSyntaxTree(p.Buffer)
-}
-
-func (p *Peg) Highlighter() {
-	p.PrintSyntax()
-}
-
-func (p *Peg) Execute() {
-	buffer, _buffer, text, begin, end := p.Buffer, p.buffer, "", 0, 0
-	for token := range p.Tokens() {
-		switch token.pegRule {
-
-		case rulePegText:
-			begin, end = int(token.begin), int(token.end)
-			text = string(_buffer[begin:end])
-
-		case ruleAction0:
-			p.AddPackage(text)
-		case ruleAction1:
-			p.AddPeg(text)
-		case ruleAction2:
-			p.AddState(text)
-		case ruleAction3:
-			p.AddImport(text)
-		case ruleAction4:
-			p.AddRule(text)
-		case ruleAction5:
-			p.AddExpression()
-		case ruleAction6:
-			p.AddAlternate()
-		case ruleAction7:
-			p.AddNil()
-			p.AddAlternate()
-		case ruleAction8:
-			p.AddNil()
-		case ruleAction9:
-			p.AddSequence()
-		case ruleAction10:
-			p.AddPredicate(text)
-		case ruleAction11:
-			p.AddStateChange(text)
-		case ruleAction12:
-			p.AddPeekFor()
-		case ruleAction13:
-			p.AddPeekNot()
-		case ruleAction14:
-			p.AddQuery()
-		case ruleAction15:
-			p.AddStar()
-		case ruleAction16:
-			p.AddPlus()
-		case ruleAction17:
-			p.AddName(text)
-		case ruleAction18:
-			p.AddDot()
-		case ruleAction19:
-			p.AddAction(text)
-		case ruleAction20:
-			p.AddPush()
-		case ruleAction21:
-			p.AddSequence()
-		case ruleAction22:
-			p.AddSequence()
-		case ruleAction23:
-			p.AddPeekNot()
-			p.AddDot()
-			p.AddSequence()
-		case ruleAction24:
-			p.AddPeekNot()
-			p.AddDot()
-			p.AddSequence()
-		case ruleAction25:
-			p.AddAlternate()
-		case ruleAction26:
-			p.AddAlternate()
-		case ruleAction27:
-			p.AddRange()
-		case ruleAction28:
-			p.AddDoubleRange()
-		case ruleAction29:
-			p.AddCharacter(text)
-		case ruleAction30:
-			p.AddDoubleCharacter(text)
-		case ruleAction31:
-			p.AddCharacter(text)
-		case ruleAction32:
-			p.AddCharacter("\a")
-		case ruleAction33:
-			p.AddCharacter("\b")
-		case ruleAction34:
-			p.AddCharacter("\x1B")
-		case ruleAction35:
-			p.AddCharacter("\f")
-		case ruleAction36:
-			p.AddCharacter("\n")
-		case ruleAction37:
-			p.AddCharacter("\r")
-		case ruleAction38:
-			p.AddCharacter("\t")
-		case ruleAction39:
-			p.AddCharacter("\v")
-		case ruleAction40:
-			p.AddCharacter("'")
-		case ruleAction41:
-			p.AddCharacter("\"")
-		case ruleAction42:
-			p.AddCharacter("[")
-		case ruleAction43:
-			p.AddCharacter("]")
-		case ruleAction44:
-			p.AddCharacter("-")
-		case ruleAction45:
-			p.AddHexaCharacter(text)
-		case ruleAction46:
-			p.AddOctalCharacter(text)
-		case ruleAction47:
-			p.AddOctalCharacter(text)
-		case ruleAction48:
-			p.AddCharacter("\\")
-
-		}
-	}
-	_, _, _, _, _ = buffer, _buffer, text, begin, end
-}
-
-func (p *Peg) Init() {
-	p.buffer = []rune(p.Buffer)
-	if len(p.buffer) == 0 || p.buffer[len(p.buffer)-1] != endSymbol {
-		p.buffer = append(p.buffer, endSymbol)
-	}
-
-	tree := tokens32{tree: make([]token32, math.MaxInt16)}
-	var max token32
-	position, depth, tokenIndex, buffer, _rules := uint32(0), uint32(0), 0, p.buffer, p.rules
-
-	p.Parse = func(rule ...int) error {
-		r := 1
-		if len(rule) > 0 {
-			r = rule[0]
-		}
-		matches := p.rules[r]()
-		p.tokens32 = tree
-		if matches {
-			p.trim(tokenIndex)
-			return nil
-		}
-		return &parseError{p, max}
-	}
-
-	p.Reset = func() {
-		position, tokenIndex, depth = 0, 0, 0
-	}
-
-	add := func(rule pegRule, begin uint32) {
-		tree.Expand(tokenIndex)
-		tree.Add(rule, begin, position, depth, tokenIndex)
-		tokenIndex++
-		if begin != position && position > max.end {
-			max = token32{rule, begin, position, depth}
-		}
-	}
-
-	matchDot := func() bool {
-		if buffer[position] != endSymbol {
-			position++
-			return true
-		}
-		return false
-	}
-
-	/*matchChar := func(c byte) bool {
-		if buffer[position] == c {
-			position++
-			return true
-		}
-		return false
-	}*/
-
-	/*matchRange := func(lower byte, upper byte) bool {
-		if c := buffer[position]; c >= lower && c <= upper {
-			position++
-			return true
-		}
-		return false
-	}*/
-
-	_rules = [...]func() bool{
-		nil,
-		/* 0 Grammar <- <(Spacing ('p' 'a' 'c' 'k' 'a' 'g' 'e') MustSpacing Identifier Action0 Import* ('t' 'y' 'p' 'e') MustSpacing Identifier Action1 ('P' 'e' 'g') Spacing Action Action2 Definition+ EndOfFile)> */
-		func() bool {
-			position0, tokenIndex0, depth0 := position, tokenIndex, depth
-			{
-				position1 := position
-				depth++
-				if !_rules[ruleSpacing]() {
-					goto l0
-				}
-				if buffer[position] != rune('p') {
-					goto l0
-				}
-				position++
-				if buffer[position] != rune('a') {
-					goto l0
-				}
-				position++
-				if buffer[position] != rune('c') {
-					goto l0
-				}
-				position++
-				if buffer[position] != rune('k') {
-					goto l0
-				}
-				position++
-				if buffer[position] != rune('a') {
-					goto l0
-				}
-				position++
-				if buffer[position] != rune('g') {
-					goto l0
-				}
-				position++
-				if buffer[position] != rune('e') {
-					goto l0
-				}
-				position++
-				if !_rules[ruleMustSpacing]() {
-					goto l0
-				}
-				if !_rules[ruleIdentifier]() {
-					goto l0
-				}
-				{
-					add(ruleAction0, position)
-				}
-			l3:
-				{
-					position4, tokenIndex4, depth4 := position, tokenIndex, depth
-					{
-						position5 := position
-						depth++
-						if buffer[position] != rune('i') {
-							goto l4
-						}
-						position++
-						if buffer[position] != rune('m') {
-							goto l4
-						}
-						position++
-						if buffer[position] != rune('p') {
-							goto l4
-						}
-						position++
-						if buffer[position] != rune('o') {
-							goto l4
-						}
-						position++
-						if buffer[position] != rune('r') {
-							goto l4
-						}
-						position++
-						if buffer[position] != rune('t') {
-							goto l4
-						}
-						position++
-						if !_rules[ruleSpacing]() {
-							goto l4
-						}
-						if buffer[position] != rune('"') {
-							goto l4
-						}
-						position++
-						{
-							position6 := position
-							depth++
-							{
-								switch buffer[position] {
-								case '-':
-									if buffer[position] != rune('-') {
-										goto l4
-									}
-									position++
-									break
-								case '.':
-									if buffer[position] != rune('.') {
-										goto l4
-									}
-									position++
-									break
-								case '/':
-									if buffer[position] != rune('/') {
-										goto l4
-									}
-									position++
-									break
-								case '_':
-									if buffer[position] != rune('_') {
-										goto l4
-									}
-									position++
-									break
-								case 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z':
-									if c := buffer[position]; c < rune('A') || c > rune('Z') {
-										goto l4
-									}
-									position++
-									break
-								default:
-									if c := buffer[position]; c < rune('a') || c > rune('z') {
-										goto l4
-									}
-									position++
-									break
-								}
-							}
-
-						l7:
-							{
-								position8, tokenIndex8, depth8 := position, tokenIndex, depth
-								{
-									switch buffer[position] {
-									case '-':
-										if buffer[position] != rune('-') {
-											goto l8
-										}
-										position++
-										break
-									case '.':
-										if buffer[position] != rune('.') {
-											goto l8
-										}
-										position++
-										break
-									case '/':
-										if buffer[position] != rune('/') {
-											goto l8
-										}
-										position++
-										break
-									case '_':
-										if buffer[position] != rune('_') {
-											goto l8
-										}
-										position++
-										break
-									case 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z':
-										if c := buffer[position]; c < rune('A') || c > rune('Z') {
-											goto l8
-										}
-										position++
-										break
-									default:
-										if c := buffer[position]; c < rune('a') || c > rune('z') {
-											goto l8
-										}
-										position++
-										break
-									}
-								}
-
-								goto l7
-							l8:
-								position, tokenIndex, depth = position8, tokenIndex8, depth8
-							}
-							depth--
-							add(rulePegText, position6)
-						}
-						if buffer[position] != rune('"') {
-							goto l4
-						}
-						position++
-						if !_rules[ruleSpacing]() {
-							goto l4
-						}
-						{
-							add(ruleAction3, position)
-						}
-						depth--
-						add(ruleImport, position5)
-					}
-					goto l3
-				l4:
-					position, tokenIndex, depth = position4, tokenIndex4, depth4
-				}
-				if buffer[position] != rune('t') {
-					goto l0
-				}
-				position++
-				if buffer[position] != rune('y') {
-					goto l0
-				}
-				position++
-				if buffer[position] != rune('p') {
-					goto l0
-				}
-				position++
-				if buffer[position] != rune('e') {
-					goto l0
-				}
-				position++
-				if !_rules[ruleMustSpacing]() {
-					goto l0
-				}
-				if !_rules[ruleIdentifier]() {
-					goto l0
-				}
-				{
-					add(ruleAction1, position)
-				}
-				if buffer[position] != rune('P') {
-					goto l0
-				}
-				position++
-				if buffer[position] != rune('e') {
-					goto l0
-				}
-				position++
-				if buffer[position] != rune('g') {
-					goto l0
-				}
-				position++
-				if !_rules[ruleSpacing]() {
-					goto l0
-				}
-				if !_rules[ruleAction]() {
-					goto l0
-				}
-				{
-					add(ruleAction2, position)
-				}
-				{
-					position16 := position
-					depth++
-					if !_rules[ruleIdentifier]() {
-						goto l0
-					}
-					{
-						add(ruleAction4, position)
-					}
-					if !_rules[ruleLeftArrow]() {
-						goto l0
-					}
-					if !_rules[ruleExpression]() {
-						goto l0
-					}
-					{
-						add(ruleAction5, position)
-					}
-					{
-						position19, tokenIndex19, depth19 := position, tokenIndex, depth
-						{
-							position20, tokenIndex20, depth20 := position, tokenIndex, depth
-							if !_rules[ruleIdentifier]() {
-								goto l21
-							}
-							if !_rules[ruleLeftArrow]() {
-								goto l21
-							}
-							goto l20
-						l21:
-							position, tokenIndex, depth = position20, tokenIndex20, depth20
-							{
-								position22, tokenIndex22, depth22 := position, tokenIndex, depth
-								if !matchDot() {
-									goto l22
-								}
-								goto l0
-							l22:
-								position, tokenIndex, depth = position22, tokenIndex22, depth22
-							}
-						}
-					l20:
-						position, tokenIndex, depth = position19, tokenIndex19, depth19
-					}
-					depth--
-					add(ruleDefinition, position16)
-				}
-			l14:
-				{
-					position15, tokenIndex15, depth15 := position, tokenIndex, depth
-					{
-						position23 := position
-						depth++
-						if !_rules[ruleIdentifier]() {
-							goto l15
-						}
-						{
-							add(ruleAction4, position)
-						}
-						if !_rules[ruleLeftArrow]() {
-							goto l15
-						}
-						if !_rules[ruleExpression]() {
-							goto l15
-						}
-						{
-							add(ruleAction5, position)
-						}
-						{
-							position26, tokenIndex26, depth26 := position, tokenIndex, depth
-							{
-								position27, tokenIndex27, depth27 := position, tokenIndex, depth
-								if !_rules[ruleIdentifier]() {
-									goto l28
-								}
-								if !_rules[ruleLeftArrow]() {
-									goto l28
-								}
-								goto l27
-							l28:
-								position, tokenIndex, depth = position27, tokenIndex27, depth27
-								{
-									position29, tokenIndex29, depth29 := position, tokenIndex, depth
-									if !matchDot() {
-										goto l29
-									}
-									goto l15
-								l29:
-									position, tokenIndex, depth = position29, tokenIndex29, depth29
-								}
-							}
-						l27:
-							position, tokenIndex, depth = position26, tokenIndex26, depth26
-						}
-						depth--
-						add(ruleDefinition, position23)
-					}
-					goto l14
-				l15:
-					position, tokenIndex, depth = position15, tokenIndex15, depth15
-				}
-				{
-					position30 := position
-					depth++
-					{
-						position31, tokenIndex31, depth31 := position, tokenIndex, depth
-						if !matchDot() {
-							goto l31
-						}
-						goto l0
-					l31:
-						position, tokenIndex, depth = position31, tokenIndex31, depth31
-					}
-					depth--
-					add(ruleEndOfFile, position30)
-				}
-				depth--
-				add(ruleGrammar, position1)
-			}
-			return true
-		l0:
-			position, tokenIndex, depth = position0, tokenIndex0, depth0
-			return false
-		},
-		/* 1 Import <- <('i' 'm' 'p' 'o' 'r' 't' Spacing '"' <((&('-') '-') | (&('.') '.') | (&('/') '/') | (&('_') '_') | (&('A' | 'B' | 'C' | 'D' | 'E' | 'F' | 'G' | 'H' | 'I' | 'J' | 'K' | 'L' | 'M' | 'N' | 'O' | 'P' | 'Q' | 'R' | 'S' | 'T' | 'U' | 'V' | 'W' | 'X' | 'Y' | 'Z') [A-Z]) | (&('a' | 'b' | 'c' | 'd' | 'e' | 'f' | 'g' | 'h' | 'i' | 'j' | 'k' | 'l' | 'm' | 'n' | 'o' | 'p' | 'q' | 'r' | 's' | 't' | 'u' | 'v' | 'w' | 'x' | 'y' | 'z') [a-z]))+> '"' Spacing Action3)> */
-		nil,
-		/* 2 Definition <- <(Identifier Action4 LeftArrow Expression Action5 &((Identifier LeftArrow) / !.))> */
-		nil,
-		/* 3 Expression <- <((Sequence (Slash Sequence Action6)* (Slash Action7)?) / Action8)> */
-		func() bool {
-			{
-				position35 := position
-				depth++
-				{
-					position36, tokenIndex36, depth36 := position, tokenIndex, depth
-					if !_rules[ruleSequence]() {
-						goto l37
-					}
-				l38:
-					{
-						position39, tokenIndex39, depth39 := position, tokenIndex, depth
-						if !_rules[ruleSlash]() {
-							goto l39
-						}
-						if !_rules[ruleSequence]() {
-							goto l39
-						}
-						{
-							add(ruleAction6, position)
-						}
-						goto l38
-					l39:
-						position, tokenIndex, depth = position39, tokenIndex39, depth39
-					}
-					{
-						position41, tokenIndex41, depth41 := position, tokenIndex, depth
-						if !_rules[ruleSlash]() {
-							goto l41
-						}
-						{
-							add(ruleAction7, position)
-						}
-						goto l42
-					l41:
-						position, tokenIndex, depth = position41, tokenIndex41, depth41
-					}
-				l42:
-					goto l36
-				l37:
-					position, tokenIndex, depth = position36, tokenIndex36, depth36
-					{
-						add(ruleAction8, position)
-					}
-				}
-			l36:
-				depth--
-				add(ruleExpression, position35)
-			}
-			return true
-		},
-		/* 4 Sequence <- <(Prefix (Prefix Action9)*)> */
-		func() bool {
-			position45, tokenIndex45, depth45 := position, tokenIndex, depth
-			{
-				position46 := position
-				depth++
-				if !_rules[rulePrefix]() {
-					goto l45
-				}
-			l47:
-				{
-					position48, tokenIndex48, depth48 := position, tokenIndex, depth
-					if !_rules[rulePrefix]() {
-						goto l48
-					}
-					{
-						add(ruleAction9, position)
-					}
-					goto l47
-				l48:
-					position, tokenIndex, depth = position48, tokenIndex48, depth48
-				}
-				depth--
-				add(ruleSequence, position46)
-			}
-			return true
-		l45:
-			position, tokenIndex, depth = position45, tokenIndex45, depth45
-			return false
-		},
-		/* 5 Prefix <- <((And Action Action10) / (Not Action Action11) / ((&('!') (Not Suffix Action13)) | (&('&') (And Suffix Action12)) | (&('"' | '\'' | '(' | '.' | '<' | 'A' | 'B' | 'C' | 'D' | 'E' | 'F' | 'G' | 'H' | 'I' | 'J' | 'K' | 'L' | 'M' | 'N' | 'O' | 'P' | 'Q' | 'R' | 'S' | 'T' | 'U' | 'V' | 'W' | 'X' | 'Y' | 'Z' | '[' | '_' | 'a' | 'b' | 'c' | 'd' | 'e' | 'f' | 'g' | 'h' | 'i' | 'j' | 'k' | 'l' | 'm' | 'n' | 'o' | 'p' | 'q' | 'r' | 's' | 't' | 'u' | 'v' | 'w' | 'x' | 'y' | 'z' | '{') Suffix)))> */
-		func() bool {
-			position50, tokenIndex50, depth50 := position, tokenIndex, depth
-			{
-				position51 := position
-				depth++
-				{
-					position52, tokenIndex52, depth52 := position, tokenIndex, depth
-					if !_rules[ruleAnd]() {
-						goto l53
-					}
-					if !_rules[ruleAction]() {
-						goto l53
-					}
-					{
-						add(ruleAction10, position)
-					}
-					goto l52
-				l53:
-					position, tokenIndex, depth = position52, tokenIndex52, depth52
-					if !_rules[ruleNot]() {
-						goto l55
-					}
-					if !_rules[ruleAction]() {
-						goto l55
-					}
-					{
-						add(ruleAction11, position)
-					}
-					goto l52
-				l55:
-					position, tokenIndex, depth = position52, tokenIndex52, depth52
-					{
-						switch buffer[position] {
-						case '!':
-							if !_rules[ruleNot]() {
-								goto l50
-							}
-							if !_rules[ruleSuffix]() {
-								goto l50
-							}
-							{
-								add(ruleAction13, position)
-							}
-							break
-						case '&':
-							if !_rules[ruleAnd]() {
-								goto l50
-							}
-							if !_rules[ruleSuffix]() {
-								goto l50
-							}
-							{
-								add(ruleAction12, position)
-							}
-							break
-						default:
-							if !_rules[ruleSuffix]() {
-								goto l50
-							}
-							break
-						}
-					}
-
-				}
-			l52:
-				depth--
-				add(rulePrefix, position51)
-			}
-			return true
-		l50:
-			position, tokenIndex, depth = position50, tokenIndex50, depth50
-			return false
-		},
-		/* 6 Suffix <- <(Primary ((&('+') (Plus Action16)) | (&('*') (Star Action15)) | (&('?') (Question Action14)))?)> */
-		func() bool {
-			position60, tokenIndex60, depth60 := position, tokenIndex, depth
-			{
-				position61 := position
-				depth++
-				{
-					position62 := position
-					depth++
-					{
-						switch buffer[position] {
-						case '<':
-							{
-								position64 := position
-								depth++
-								if buffer[position] != rune('<') {
-									goto l60
-								}
-								position++
-								if !_rules[ruleSpacing]() {
-									goto l60
-								}
-								depth--
-								add(ruleBegin, position64)
-							}
-							if !_rules[ruleExpression]() {
-								goto l60
-							}
-							{
-								position65 := position
-								depth++
-								if buffer[position] != rune('>') {
-									goto l60
-								}
-								position++
-								if !_rules[ruleSpacing]() {
-									goto l60
-								}
-								depth--
-								add(ruleEnd, position65)
-							}
-							{
-								add(ruleAction20, position)
-							}
-							break
-						case '{':
-							if !_rules[ruleAction]() {
-								goto l60
-							}
-							{
-								add(ruleAction19, position)
-							}
-							break
-						case '.':
-							{
-								position68 := position
-								depth++
-								if buffer[position] != rune('.') {
-									goto l60
-								}
-								position++
-								if !_rules[ruleSpacing]() {
-									goto l60
-								}
-								depth--
-								add(ruleDot, position68)
-							}
-							{
-								add(ruleAction18, position)
-							}
-							break
-						case '[':
-							{
-								position70 := position
-								depth++
-								{
-									position71, tokenIndex71, depth71 := position, tokenIndex, depth
-									if buffer[position] != rune('[') {
-										goto l72
-									}
-									position++
-									if buffer[position] != rune('[') {
-										goto l72
-									}
-									position++
-									{
-										position73, tokenIndex73, depth73 := position, tokenIndex, depth
-										{
-											position75, tokenIndex75, depth75 := position, tokenIndex, depth
-											if buffer[position] != rune('^') {
-												goto l76
-											}
-											position++
-											if !_rules[ruleDoubleRanges]() {
-												goto l76
-											}
-											{
-												add(ruleAction23, position)
-											}
-											goto l75
-										l76:
-											position, tokenIndex, depth = position75, tokenIndex75, depth75
-											if !_rules[ruleDoubleRanges]() {
-												goto l73
-											}
-										}
-									l75:
-										goto l74
-									l73:
-										position, tokenIndex, depth = position73, tokenIndex73, depth73
-									}
-								l74:
-									if buffer[position] != rune(']') {
-										goto l72
-									}
-									position++
-									if buffer[position] != rune(']') {
-										goto l72
-									}
-									position++
-									goto l71
-								l72:
-									position, tokenIndex, depth = position71, tokenIndex71, depth71
-									if buffer[position] != rune('[') {
-										goto l60
-									}
-									position++
-									{
-										position78, tokenIndex78, depth78 := position, tokenIndex, depth
-										{
-											position80, tokenIndex80, depth80 := position, tokenIndex, depth
-											if buffer[position] != rune('^') {
-												goto l81
-											}
-											position++
-											if !_rules[ruleRanges]() {
-												goto l81
-											}
-											{
-												add(ruleAction24, position)
-											}
-											goto l80
-										l81:
-											position, tokenIndex, depth = position80, tokenIndex80, depth80
-											if !_rules[ruleRanges]() {
-												goto l78
-											}
-										}
-									l80:
-										goto l79
-									l78:
-										position, tokenIndex, depth = position78, tokenIndex78, depth78
-									}
-								l79:
-									if buffer[position] != rune(']') {
-										goto l60
-									}
-									position++
-								}
-							l71:
-								if !_rules[ruleSpacing]() {
-									goto l60
-								}
-								depth--
-								add(ruleClass, position70)
-							}
-							break
-						case '"', '\'':
-							{
-								position83 := position
-								depth++
-								{
-									position84, tokenIndex84, depth84 := position, tokenIndex, depth
-									if buffer[position] != rune('\'') {
-										goto l85
-									}
-									position++
-									{
-										position86, tokenIndex86, depth86 := position, tokenIndex, depth
-										{
-											position88, tokenIndex88, depth88 := position, tokenIndex, depth
-											if buffer[position] != rune('\'') {
-												goto l88
-											}
-											position++
-											goto l86
-										l88:
-											position, tokenIndex, depth = position88, tokenIndex88, depth88
-										}
-										if !_rules[ruleChar]() {
-											goto l86
-										}
-										goto l87
-									l86:
-										position, tokenIndex, depth = position86, tokenIndex86, depth86
-									}
-								l87:
-								l89:
-									{
-										position90, tokenIndex90, depth90 := position, tokenIndex, depth
-										{
-											position91, tokenIndex91, depth91 := position, tokenIndex, depth
-											if buffer[position] != rune('\'') {
-												goto l91
-											}
-											position++
-											goto l90
-										l91:
-											position, tokenIndex, depth = position91, tokenIndex91, depth91
-										}
-										if !_rules[ruleChar]() {
-											goto l90
-										}
-										{
-											add(ruleAction21, position)
-										}
-										goto l89
-									l90:
-										position, tokenIndex, depth = position90, tokenIndex90, depth90
-									}
-									if buffer[position] != rune('\'') {
-										goto l85
-									}
-									position++
-									if !_rules[ruleSpacing]() {
-										goto l85
-									}
-									goto l84
-								l85:
-									position, tokenIndex, depth = position84, tokenIndex84, depth84
-									if buffer[position] != rune('"') {
-										goto l60
-									}
-									position++
-									{
-										position93, tokenIndex93, depth93 := position, tokenIndex, depth
-										{
-											position95, tokenIndex95, depth95 := position, tokenIndex, depth
-											if buffer[position] != rune('"') {
-												goto l95
-											}
-											position++
-											goto l93
-										l95:
-											position, tokenIndex, depth = position95, tokenIndex95, depth95
-										}
-										if !_rules[ruleDoubleChar]() {
-											goto l93
-										}
-										goto l94
-									l93:
-										position, tokenIndex, depth = position93, tokenIndex93, depth93
-									}
-								l94:
-								l96:
-									{
-										position97, tokenIndex97, depth97 := position, tokenIndex, depth
-										{
-											position98, tokenIndex98, depth98 := position, tokenIndex, depth
-											if buffer[position] != rune('"') {
-												goto l98
-											}
-											position++
-											goto l97
-										l98:
-											position, tokenIndex, depth = position98, tokenIndex98, depth98
-										}
-										if !_rules[ruleDoubleChar]() {
-											goto l97
-										}
-										{
-											add(ruleAction22, position)
-										}
-										goto l96
-									l97:
-										position, tokenIndex, depth = position97, tokenIndex97, depth97
-									}
-									if buffer[position] != rune('"') {
-										goto l60
-									}
-									position++
-									if !_rules[ruleSpacing]() {
-										goto l60
-									}
-								}
-							l84:
-								depth--
-								add(ruleLiteral, position83)
-							}
-							break
-						case '(':
-							{
-								position100 := position
-								depth++
-								if buffer[position] != rune('(') {
-									goto l60
-								}
-								position++
-								if !_rules[ruleSpacing]() {
-									goto l60
-								}
-								depth--
-								add(ruleOpen, position100)
-							}
-							if !_rules[ruleExpression]() {
-								goto l60
-							}
-							{
-								position101 := position
-								depth++
-								if buffer[position] != rune(')') {
-									goto l60
-								}
-								position++
-								if !_rules[ruleSpacing]() {
-									goto l60
-								}
-								depth--
-								add(ruleClose, position101)
-							}
-							break
-						default:
-							if !_rules[ruleIdentifier]() {
-								goto l60
-							}
-							{
-								position102, tokenIndex102, depth102 := position, tokenIndex, depth
-								if !_rules[ruleLeftArrow]() {
-									goto l102
-								}
-								goto l60
-							l102:
-								position, tokenIndex, depth = position102, tokenIndex102, depth102
-							}
-							{
-								add(ruleAction17, position)
-							}
-							break
-						}
-					}
-
-					depth--
-					add(rulePrimary, position62)
-				}
-				{
-					position104, tokenIndex104, depth104 := position, tokenIndex, depth
-					{
-						switch buffer[position] {
-						case '+':
-							{
-								position107 := position
-								depth++
-								if buffer[position] != rune('+') {
-									goto l104
-								}
-								position++
-								if !_rules[ruleSpacing]() {
-									goto l104
-								}
-								depth--
-								add(rulePlus, position107)
-							}
-							{
-								add(ruleAction16, position)
-							}
-							break
-						case '*':
-							{
-								position109 := position
-								depth++
-								if buffer[position] != rune('*') {
-									goto l104
-								}
-								position++
-								if !_rules[ruleSpacing]() {
-									goto l104
-								}
-								depth--
-								add(ruleStar, position109)
-							}
-							{
-								add(ruleAction15, position)
-							}
-							break
-						default:
-							{
-								position111 := position
-								depth++
-								if buffer[position] != rune('?') {
-									goto l104
-								}
-								position++
-								if !_rules[ruleSpacing]() {
-									goto l104
-								}
-								depth--
-								add(ruleQuestion, position111)
-							}
-							{
-								add(ruleAction14, position)
-							}
-							break
-						}
-					}
-
-					goto l105
-				l104:
-					position, tokenIndex, depth = position104, tokenIndex104, depth104
-				}
-			l105:
-				depth--
-				add(ruleSuffix, position61)
-			}
-			return true
-		l60:
-			position, tokenIndex, depth = position60, tokenIndex60, depth60
-			return false
-		},
-		/* 7 Primary <- <((&('<') (Begin Expression End Action20)) | (&('{') (Action Action19)) | (&('.') (Dot Action18)) | (&('[') Class) | (&('"' | '\'') Literal) | (&('(') (Open Expression Close)) | (&('A' | 'B' | 'C' | 'D' | 'E' | 'F' | 'G' | 'H' | 'I' | 'J' | 'K' | 'L' | 'M' | 'N' | 'O' | 'P' | 'Q' | 'R' | 'S' | 'T' | 'U' | 'V' | 'W' | 'X' | 'Y' | 'Z' | '_' | 'a' | 'b' | 'c' | 'd' | 'e' | 'f' | 'g' | 'h' | 'i' | 'j' | 'k' | 'l' | 'm' | 'n' | 'o' | 'p' | 'q' | 'r' | 's' | 't' | 'u' | 'v' | 'w' | 'x' | 'y' | 'z') (Identifier !LeftArrow Action17)))> */
-		nil,
-		/* 8 Identifier <- <(<(IdentStart IdentCont*)> Spacing)> */
-		func() bool {
-			position114, tokenIndex114, depth114 := position, tokenIndex, depth
-			{
-				position115 := position
-				depth++
-				{
-					position116 := position
-					depth++
-					if !_rules[ruleIdentStart]() {
-						goto l114
-					}
-				l117:
-					{
-						position118, tokenIndex118, depth118 := position, tokenIndex, depth
-						{
-							position119 := position
-							depth++
-							{
-								position120, tokenIndex120, depth120 := position, tokenIndex, depth
-								if !_rules[ruleIdentStart]() {
-									goto l121
-								}
-								goto l120
-							l121:
-								position, tokenIndex, depth = position120, tokenIndex120, depth120
-								if c := buffer[position]; c < rune('0') || c > rune('9') {
-									goto l118
-								}
-								position++
-							}
-						l120:
-							depth--
-							add(ruleIdentCont, position119)
-						}
-						goto l117
-					l118:
-						position, tokenIndex, depth = position118, tokenIndex118, depth118
-					}
-					depth--
-					add(rulePegText, position116)
-				}
-				if !_rules[ruleSpacing]() {
-					goto l114
-				}
-				depth--
-				add(ruleIdentifier, position115)
-			}
-			return true
-		l114:
-			position, tokenIndex, depth = position114, tokenIndex114, depth114
-			return false
-		},
-		/* 9 IdentStart <- <((&('_') '_') | (&('A' | 'B' | 'C' | 'D' | 'E' | 'F' | 'G' | 'H' | 'I' | 'J' | 'K' | 'L' | 'M' | 'N' | 'O' | 'P' | 'Q' | 'R' | 'S' | 'T' | 'U' | 'V' | 'W' | 'X' | 'Y' | 'Z') [A-Z]) | (&('a' | 'b' | 'c' | 'd' | 'e' | 'f' | 'g' | 'h' | 'i' | 'j' | 'k' | 'l' | 'm' | 'n' | 'o' | 'p' | 'q' | 'r' | 's' | 't' | 'u' | 'v' | 'w' | 'x' | 'y' | 'z') [a-z]))> */
-		func() bool {
-			position122, tokenIndex122, depth122 := position, tokenIndex, depth
-			{
-				position123 := position
-				depth++
-				{
-					switch buffer[position] {
-					case '_':
-						if buffer[position] != rune('_') {
-							goto l122
-						}
-						position++
-						break
-					case 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z':
-						if c := buffer[position]; c < rune('A') || c > rune('Z') {
-							goto l122
-						}
-						position++
-						break
-					default:
-						if c := buffer[position]; c < rune('a') || c > rune('z') {
-							goto l122
-						}
-						position++
-						break
-					}
-				}
-
-				depth--
-				add(ruleIdentStart, position123)
-			}
-			return true
-		l122:
-			position, tokenIndex, depth = position122, tokenIndex122, depth122
-			return false
-		},
-		/* 10 IdentCont <- <(IdentStart / [0-9])> */
-		nil,
-		/* 11 Literal <- <(('\'' (!'\'' Char)? (!'\'' Char Action21)* '\'' Spacing) / ('"' (!'"' DoubleChar)? (!'"' DoubleChar Action22)* '"' Spacing))> */
-		nil,
-		/* 12 Class <- <((('[' '[' (('^' DoubleRanges Action23) / DoubleRanges)? (']' ']')) / ('[' (('^' Ranges Action24) / Ranges)? ']')) Spacing)> */
-		nil,
-		/* 13 Ranges <- <(!']' Range (!']' Range Action25)*)> */
-		func() bool {
-			position128, tokenIndex128, depth128 := position, tokenIndex, depth
-			{
-				position129 := position
-				depth++
-				{
-					position130, tokenIndex130, depth130 := position, tokenIndex, depth
-					if buffer[position] != rune(']') {
-						goto l130
-					}
-					position++
-					goto l128
-				l130:
-					position, tokenIndex, depth = position130, tokenIndex130, depth130
-				}
-				if !_rules[ruleRange]() {
-					goto l128
-				}
-			l131:
-				{
-					position132, tokenIndex132, depth132 := position, tokenIndex, depth
-					{
-						position133, tokenIndex133, depth133 := position, tokenIndex, depth
-						if buffer[position] != rune(']') {
-							goto l133
-						}
-						position++
-						goto l132
-					l133:
-						position, tokenIndex, depth = position133, tokenIndex133, depth133
-					}
-					if !_rules[ruleRange]() {
-						goto l132
-					}
-					{
-						add(ruleAction25, position)
-					}
-					goto l131
-				l132:
-					position, tokenIndex, depth = position132, tokenIndex132, depth132
-				}
-				depth--
-				add(ruleRanges, position129)
-			}
-			return true
-		l128:
-			position, tokenIndex, depth = position128, tokenIndex128, depth128
-			return false
-		},
-		/* 14 DoubleRanges <- <(!(']' ']') DoubleRange (!(']' ']') DoubleRange Action26)*)> */
-		func() bool {
-			position135, tokenIndex135, depth135 := position, tokenIndex, depth
-			{
-				position136 := position
-				depth++
-				{
-					position137, tokenIndex137, depth137 := position, tokenIndex, depth
-					if buffer[position] != rune(']') {
-						goto l137
-					}
-					position++
-					if buffer[position] != rune(']') {
-						goto l137
-					}
-					position++
-					goto l135
-				l137:
-					position, tokenIndex, depth = position137, tokenIndex137, depth137
-				}
-				if !_rules[ruleDoubleRange]() {
-					goto l135
-				}
-			l138:
-				{
-					position139, tokenIndex139, depth139 := position, tokenIndex, depth
-					{
-						position140, tokenIndex140, depth140 := position, tokenIndex, depth
-						if buffer[position] != rune(']') {
-							goto l140
-						}
-						position++
-						if buffer[position] != rune(']') {
-							goto l140
-						}
-						position++
-						goto l139
-					l140:
-						position, tokenIndex, depth = position140, tokenIndex140, depth140
-					}
-					if !_rules[ruleDoubleRange]() {
-						goto l139
-					}
-					{
-						add(ruleAction26, position)
-					}
-					goto l138
-				l139:
-					position, tokenIndex, depth = position139, tokenIndex139, depth139
-				}
-				depth--
-				add(ruleDoubleRanges, position136)
-			}
-			return true
-		l135:
-			position, tokenIndex, depth = position135, tokenIndex135, depth135
-			return false
-		},
-		/* 15 Range <- <((Char '-' Char Action27) / Char)> */
-		func() bool {
-			position142, tokenIndex142, depth142 := position, tokenIndex, depth
-			{
-				position143 := position
-				depth++
-				{
-					position144, tokenIndex144, depth144 := position, tokenIndex, depth
-					if !_rules[ruleChar]() {
-						goto l145
-					}
-					if buffer[position] != rune('-') {
-						goto l145
-					}
-					position++
-					if !_rules[ruleChar]() {
-						goto l145
-					}
-					{
-						add(ruleAction27, position)
-					}
-					goto l144
-				l145:
-					position, tokenIndex, depth = position144, tokenIndex144, depth144
-					if !_rules[ruleChar]() {
-						goto l142
-					}
-				}
-			l144:
-				depth--
-				add(ruleRange, position143)
-			}
-			return true
-		l142:
-			position, tokenIndex, depth = position142, tokenIndex142, depth142
-			return false
-		},
-		/* 16 DoubleRange <- <((Char '-' Char Action28) / DoubleChar)> */
-		func() bool {
-			position147, tokenIndex147, depth147 := position, tokenIndex, depth
-			{
-				position148 := position
-				depth++
-				{
-					position149, tokenIndex149, depth149 := position, tokenIndex, depth
-					if !_rules[ruleChar]() {
-						goto l150
-					}
-					if buffer[position] != rune('-') {
-						goto l150
-					}
-					position++
-					if !_rules[ruleChar]() {
-						goto l150
-					}
-					{
-						add(ruleAction28, position)
-					}
-					goto l149
-				l150:
-					position, tokenIndex, depth = position149, tokenIndex149, depth149
-					if !_rules[ruleDoubleChar]() {
-						goto l147
-					}
-				}
-			l149:
-				depth--
-				add(ruleDoubleRange, position148)
-			}
-			return true
-		l147:
-			position, tokenIndex, depth = position147, tokenIndex147, depth147
-			return false
-		},
-		/* 17 Char <- <(Escape / (!'\\' <.> Action29))> */
-		func() bool {
-			position152, tokenIndex152, depth152 := position, tokenIndex, depth
-			{
-				position153 := position
-				depth++
-				{
-					position154, tokenIndex154, depth154 := position, tokenIndex, depth
-					if !_rules[ruleEscape]() {
-						goto l155
-					}
-					goto l154
-				l155:
-					position, tokenIndex, depth = position154, tokenIndex154, depth154
-					{
-						position156, tokenIndex156, depth156 := position, tokenIndex, depth
-						if buffer[position] != rune('\\') {
-							goto l156
-						}
-						position++
-						goto l152
-					l156:
-						position, tokenIndex, depth = position156, tokenIndex156, depth156
-					}
-					{
-						position157 := position
-						depth++
-						if !matchDot() {
-							goto l152
-						}
-						depth--
-						add(rulePegText, position157)
-					}
-					{
-						add(ruleAction29, position)
-					}
-				}
-			l154:
-				depth--
-				add(ruleChar, position153)
-			}
-			return true
-		l152:
-			position, tokenIndex, depth = position152, tokenIndex152, depth152
-			return false
-		},
-		/* 18 DoubleChar <- <(Escape / (<([a-z] / [A-Z])> Action30) / (!'\\' <.> Action31))> */
-		func() bool {
-			position159, tokenIndex159, depth159 := position, tokenIndex, depth
-			{
-				position160 := position
-				depth++
-				{
-					position161, tokenIndex161, depth161 := position, tokenIndex, depth
-					if !_rules[ruleEscape]() {
-						goto l162
-					}
-					goto l161
-				l162:
-					position, tokenIndex, depth = position161, tokenIndex161, depth161
-					{
-						position164 := position
-						depth++
-						{
-							position165, tokenIndex165, depth165 := position, tokenIndex, depth
-							if c := buffer[position]; c < rune('a') || c > rune('z') {
-								goto l166
-							}
-							position++
-							goto l165
-						l166:
-							position, tokenIndex, depth = position165, tokenIndex165, depth165
-							if c := buffer[position]; c < rune('A') || c > rune('Z') {
-								goto l163
-							}
-							position++
-						}
-					l165:
-						depth--
-						add(rulePegText, position164)
-					}
-					{
-						add(ruleAction30, position)
-					}
-					goto l161
-				l163:
-					position, tokenIndex, depth = position161, tokenIndex161, depth161
-					{
-						position168, tokenIndex168, depth168 := position, tokenIndex, depth
-						if buffer[position] != rune('\\') {
-							goto l168
-						}
-						position++
-						goto l159
-					l168:
-						position, tokenIndex, depth = position168, tokenIndex168, depth168
-					}
-					{
-						position169 := position
-						depth++
-						if !matchDot() {
-							goto l159
-						}
-						depth--
-						add(rulePegText, position169)
-					}
-					{
-						add(ruleAction31, position)
-					}
-				}
-			l161:
-				depth--
-				add(ruleDoubleChar, position160)
-			}
-			return true
-		l159:
-			position, tokenIndex, depth = position159, tokenIndex159, depth159
-			return false
-		},
-		/* 19 Escape <- <(('\\' ('a' / 'A') Action32) / ('\\' ('b' / 'B') Action33) / ('\\' ('e' / 'E') Action34) / ('\\' ('f' / 'F') Action35) / ('\\' ('n' / 'N') Action36) / ('\\' ('r' / 'R') Action37) / ('\\' ('t' / 'T') Action38) / ('\\' ('v' / 'V') Action39) / ('\\' '\'' Action40) / ('\\' '"' Action41) / ('\\' '[' Action42) / ('\\' ']' Action43) / ('\\' '-' Action44) / ('\\' ('0' ('x' / 'X')) <((&('A' | 'B' | 'C' | 'D' | 'E' | 'F') [A-F]) | (&('a' | 'b' | 'c' | 'd' | 'e' | 'f') [a-f]) | (&('0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9') [0-9]))+> Action45) / ('\\' <([0-3] [0-7] [0-7])> Action46) / ('\\' <([0-7] [0-7]?)> Action47) / ('\\' '\\' Action48))> */
-		func() bool {
-			position171, tokenIndex171, depth171 := position, tokenIndex, depth
-			{
-				position172 := position
-				depth++
-				{
-					position173, tokenIndex173, depth173 := position, tokenIndex, depth
-					if buffer[position] != rune('\\') {
-						goto l174
-					}
-					position++
-					{
-						position175, tokenIndex175, depth175 := position, tokenIndex, depth
-						if buffer[position] != rune('a') {
-							goto l176
-						}
-						position++
-						goto l175
-					l176:
-						position, tokenIndex, depth = position175, tokenIndex175, depth175
-						if buffer[position] != rune('A') {
-							goto l174
-						}
-						position++
-					}
-				l175:
-					{
-						add(ruleAction32, position)
-					}
-					goto l173
-				l174:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l178
-					}
-					position++
-					{
-						position179, tokenIndex179, depth179 := position, tokenIndex, depth
-						if buffer[position] != rune('b') {
-							goto l180
-						}
-						position++
-						goto l179
-					l180:
-						position, tokenIndex, depth = position179, tokenIndex179, depth179
-						if buffer[position] != rune('B') {
-							goto l178
-						}
-						position++
-					}
-				l179:
-					{
-						add(ruleAction33, position)
-					}
-					goto l173
-				l178:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l182
-					}
-					position++
-					{
-						position183, tokenIndex183, depth183 := position, tokenIndex, depth
-						if buffer[position] != rune('e') {
-							goto l184
-						}
-						position++
-						goto l183
-					l184:
-						position, tokenIndex, depth = position183, tokenIndex183, depth183
-						if buffer[position] != rune('E') {
-							goto l182
-						}
-						position++
-					}
-				l183:
-					{
-						add(ruleAction34, position)
-					}
-					goto l173
-				l182:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l186
-					}
-					position++
-					{
-						position187, tokenIndex187, depth187 := position, tokenIndex, depth
-						if buffer[position] != rune('f') {
-							goto l188
-						}
-						position++
-						goto l187
-					l188:
-						position, tokenIndex, depth = position187, tokenIndex187, depth187
-						if buffer[position] != rune('F') {
-							goto l186
-						}
-						position++
-					}
-				l187:
-					{
-						add(ruleAction35, position)
-					}
-					goto l173
-				l186:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l190
-					}
-					position++
-					{
-						position191, tokenIndex191, depth191 := position, tokenIndex, depth
-						if buffer[position] != rune('n') {
-							goto l192
-						}
-						position++
-						goto l191
-					l192:
-						position, tokenIndex, depth = position191, tokenIndex191, depth191
-						if buffer[position] != rune('N') {
-							goto l190
-						}
-						position++
-					}
-				l191:
-					{
-						add(ruleAction36, position)
-					}
-					goto l173
-				l190:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l194
-					}
-					position++
-					{
-						position195, tokenIndex195, depth195 := position, tokenIndex, depth
-						if buffer[position] != rune('r') {
-							goto l196
-						}
-						position++
-						goto l195
-					l196:
-						position, tokenIndex, depth = position195, tokenIndex195, depth195
-						if buffer[position] != rune('R') {
-							goto l194
-						}
-						position++
-					}
-				l195:
-					{
-						add(ruleAction37, position)
-					}
-					goto l173
-				l194:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l198
-					}
-					position++
-					{
-						position199, tokenIndex199, depth199 := position, tokenIndex, depth
-						if buffer[position] != rune('t') {
-							goto l200
-						}
-						position++
-						goto l199
-					l200:
-						position, tokenIndex, depth = position199, tokenIndex199, depth199
-						if buffer[position] != rune('T') {
-							goto l198
-						}
-						position++
-					}
-				l199:
-					{
-						add(ruleAction38, position)
-					}
-					goto l173
-				l198:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l202
-					}
-					position++
-					{
-						position203, tokenIndex203, depth203 := position, tokenIndex, depth
-						if buffer[position] != rune('v') {
-							goto l204
-						}
-						position++
-						goto l203
-					l204:
-						position, tokenIndex, depth = position203, tokenIndex203, depth203
-						if buffer[position] != rune('V') {
-							goto l202
-						}
-						position++
-					}
-				l203:
-					{
-						add(ruleAction39, position)
-					}
-					goto l173
-				l202:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l206
-					}
-					position++
-					if buffer[position] != rune('\'') {
-						goto l206
-					}
-					position++
-					{
-						add(ruleAction40, position)
-					}
-					goto l173
-				l206:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l208
-					}
-					position++
-					if buffer[position] != rune('"') {
-						goto l208
-					}
-					position++
-					{
-						add(ruleAction41, position)
-					}
-					goto l173
-				l208:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l210
-					}
-					position++
-					if buffer[position] != rune('[') {
-						goto l210
-					}
-					position++
-					{
-						add(ruleAction42, position)
-					}
-					goto l173
-				l210:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l212
-					}
-					position++
-					if buffer[position] != rune(']') {
-						goto l212
-					}
-					position++
-					{
-						add(ruleAction43, position)
-					}
-					goto l173
-				l212:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l214
-					}
-					position++
-					if buffer[position] != rune('-') {
-						goto l214
-					}
-					position++
-					{
-						add(ruleAction44, position)
-					}
-					goto l173
-				l214:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l216
-					}
-					position++
-					if buffer[position] != rune('0') {
-						goto l216
-					}
-					position++
-					{
-						position217, tokenIndex217, depth217 := position, tokenIndex, depth
-						if buffer[position] != rune('x') {
-							goto l218
-						}
-						position++
-						goto l217
-					l218:
-						position, tokenIndex, depth = position217, tokenIndex217, depth217
-						if buffer[position] != rune('X') {
-							goto l216
-						}
-						position++
-					}
-				l217:
-					{
-						position219 := position
-						depth++
-						{
-							switch buffer[position] {
-							case 'A', 'B', 'C', 'D', 'E', 'F':
-								if c := buffer[position]; c < rune('A') || c > rune('F') {
-									goto l216
-								}
-								position++
-								break
-							case 'a', 'b', 'c', 'd', 'e', 'f':
-								if c := buffer[position]; c < rune('a') || c > rune('f') {
-									goto l216
-								}
-								position++
-								break
-							default:
-								if c := buffer[position]; c < rune('0') || c > rune('9') {
-									goto l216
-								}
-								position++
-								break
-							}
-						}
-
-					l220:
-						{
-							position221, tokenIndex221, depth221 := position, tokenIndex, depth
-							{
-								switch buffer[position] {
-								case 'A', 'B', 'C', 'D', 'E', 'F':
-									if c := buffer[position]; c < rune('A') || c > rune('F') {
-										goto l221
-									}
-									position++
-									break
-								case 'a', 'b', 'c', 'd', 'e', 'f':
-									if c := buffer[position]; c < rune('a') || c > rune('f') {
-										goto l221
-									}
-									position++
-									break
-								default:
-									if c := buffer[position]; c < rune('0') || c > rune('9') {
-										goto l221
-									}
-									position++
-									break
-								}
-							}
-
-							goto l220
-						l221:
-							position, tokenIndex, depth = position221, tokenIndex221, depth221
-						}
-						depth--
-						add(rulePegText, position219)
-					}
-					{
-						add(ruleAction45, position)
-					}
-					goto l173
-				l216:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l225
-					}
-					position++
-					{
-						position226 := position
-						depth++
-						if c := buffer[position]; c < rune('0') || c > rune('3') {
-							goto l225
-						}
-						position++
-						if c := buffer[position]; c < rune('0') || c > rune('7') {
-							goto l225
-						}
-						position++
-						if c := buffer[position]; c < rune('0') || c > rune('7') {
-							goto l225
-						}
-						position++
-						depth--
-						add(rulePegText, position226)
-					}
-					{
-						add(ruleAction46, position)
-					}
-					goto l173
-				l225:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l228
-					}
-					position++
-					{
-						position229 := position
-						depth++
-						if c := buffer[position]; c < rune('0') || c > rune('7') {
-							goto l228
-						}
-						position++
-						{
-							position230, tokenIndex230, depth230 := position, tokenIndex, depth
-							if c := buffer[position]; c < rune('0') || c > rune('7') {
-								goto l230
-							}
-							position++
-							goto l231
-						l230:
-							position, tokenIndex, depth = position230, tokenIndex230, depth230
-						}
-					l231:
-						depth--
-						add(rulePegText, position229)
-					}
-					{
-						add(ruleAction47, position)
-					}
-					goto l173
-				l228:
-					position, tokenIndex, depth = position173, tokenIndex173, depth173
-					if buffer[position] != rune('\\') {
-						goto l171
-					}
-					position++
-					if buffer[position] != rune('\\') {
-						goto l171
-					}
-					position++
-					{
-						add(ruleAction48, position)
-					}
-				}
-			l173:
-				depth--
-				add(ruleEscape, position172)
-			}
-			return true
-		l171:
-			position, tokenIndex, depth = position171, tokenIndex171, depth171
-			return false
-		},
-		/* 20 LeftArrow <- <((('<' '-') / '←') Spacing)> */
-		func() bool {
-			position234, tokenIndex234, depth234 := position, tokenIndex, depth
-			{
-				position235 := position
-				depth++
-				{
-					position236, tokenIndex236, depth236 := position, tokenIndex, depth
-					if buffer[position] != rune('<') {
-						goto l237
-					}
-					position++
-					if buffer[position] != rune('-') {
-						goto l237
-					}
-					position++
-					goto l236
-				l237:
-					position, tokenIndex, depth = position236, tokenIndex236, depth236
-					if buffer[position] != rune('←') {
-						goto l234
-					}
-					position++
-				}
-			l236:
-				if !_rules[ruleSpacing]() {
-					goto l234
-				}
-				depth--
-				add(ruleLeftArrow, position235)
-			}
-			return true
-		l234:
-			position, tokenIndex, depth = position234, tokenIndex234, depth234
-			return false
-		},
-		/* 21 Slash <- <('/' Spacing)> */
-		func() bool {
-			position238, tokenIndex238, depth238 := position, tokenIndex, depth
-			{
-				position239 := position
-				depth++
-				if buffer[position] != rune('/') {
-					goto l238
-				}
-				position++
-				if !_rules[ruleSpacing]() {
-					goto l238
-				}
-				depth--
-				add(ruleSlash, position239)
-			}
-			return true
-		l238:
-			position, tokenIndex, depth = position238, tokenIndex238, depth238
-			return false
-		},
-		/* 22 And <- <('&' Spacing)> */
-		func() bool {
-			position240, tokenIndex240, depth240 := position, tokenIndex, depth
-			{
-				position241 := position
-				depth++
-				if buffer[position] != rune('&') {
-					goto l240
-				}
-				position++
-				if !_rules[ruleSpacing]() {
-					goto l240
-				}
-				depth--
-				add(ruleAnd, position241)
-			}
-			return true
-		l240:
-			position, tokenIndex, depth = position240, tokenIndex240, depth240
-			return false
-		},
-		/* 23 Not <- <('!' Spacing)> */
-		func() bool {
-			position242, tokenIndex242, depth242 := position, tokenIndex, depth
-			{
-				position243 := position
-				depth++
-				if buffer[position] != rune('!') {
-					goto l242
-				}
-				position++
-				if !_rules[ruleSpacing]() {
-					goto l242
-				}
-				depth--
-				add(ruleNot, position243)
-			}
-			return true
-		l242:
-			position, tokenIndex, depth = position242, tokenIndex242, depth242
-			return false
-		},
-		/* 24 Question <- <('?' Spacing)> */
-		nil,
-		/* 25 Star <- <('*' Spacing)> */
-		nil,
-		/* 26 Plus <- <('+' Spacing)> */
-		nil,
-		/* 27 Open <- <('(' Spacing)> */
-		nil,
-		/* 28 Close <- <(')' Spacing)> */
-		nil,
-		/* 29 Dot <- <('.' Spacing)> */
-		nil,
-		/* 30 SpaceComment <- <(Space / Comment)> */
-		func() bool {
-			position250, tokenIndex250, depth250 := position, tokenIndex, depth
-			{
-				position251 := position
-				depth++
-				{
-					position252, tokenIndex252, depth252 := position, tokenIndex, depth
-					{
-						position254 := position
-						depth++
-						{
-							switch buffer[position] {
-							case '\t':
-								if buffer[position] != rune('\t') {
-									goto l253
-								}
-								position++
-								break
-							case ' ':
-								if buffer[position] != rune(' ') {
-									goto l253
-								}
-								position++
-								break
-							default:
-								if !_rules[ruleEndOfLine]() {
-									goto l253
-								}
-								break
-							}
-						}
-
-						depth--
-						add(ruleSpace, position254)
-					}
-					goto l252
-				l253:
-					position, tokenIndex, depth = position252, tokenIndex252, depth252
-					{
-						position256 := position
-						depth++
-						if buffer[position] != rune('#') {
-							goto l250
-						}
-						position++
-					l257:
-						{
-							position258, tokenIndex258, depth258 := position, tokenIndex, depth
-							{
-								position259, tokenIndex259, depth259 := position, tokenIndex, depth
-								if !_rules[ruleEndOfLine]() {
-									goto l259
-								}
-								goto l258
-							l259:
-								position, tokenIndex, depth = position259, tokenIndex259, depth259
-							}
-							if !matchDot() {
-								goto l258
-							}
-							goto l257
-						l258:
-							position, tokenIndex, depth = position258, tokenIndex258, depth258
-						}
-						if !_rules[ruleEndOfLine]() {
-							goto l250
-						}
-						depth--
-						add(ruleComment, position256)
-					}
-				}
-			l252:
-				depth--
-				add(ruleSpaceComment, position251)
-			}
-			return true
-		l250:
-			position, tokenIndex, depth = position250, tokenIndex250, depth250
-			return false
-		},
-		/* 31 Spacing <- <SpaceComment*> */
-		func() bool {
-			{
-				position261 := position
-				depth++
-			l262:
-				{
-					position263, tokenIndex263, depth263 := position, tokenIndex, depth
-					if !_rules[ruleSpaceComment]() {
-						goto l263
-					}
-					goto l262
-				l263:
-					position, tokenIndex, depth = position263, tokenIndex263, depth263
-				}
-				depth--
-				add(ruleSpacing, position261)
-			}
-			return true
-		},
-		/* 32 MustSpacing <- <SpaceComment+> */
-		func() bool {
-			position264, tokenIndex264, depth264 := position, tokenIndex, depth
-			{
-				position265 := position
-				depth++
-				if !_rules[ruleSpaceComment]() {
-					goto l264
-				}
-			l266:
-				{
-					position267, tokenIndex267, depth267 := position, tokenIndex, depth
-					if !_rules[ruleSpaceComment]() {
-						goto l267
-					}
-					goto l266
-				l267:
-					position, tokenIndex, depth = position267, tokenIndex267, depth267
-				}
-				depth--
-				add(ruleMustSpacing, position265)
-			}
-			return true
-		l264:
-			position, tokenIndex, depth = position264, tokenIndex264, depth264
-			return false
-		},
-		/* 33 Comment <- <('#' (!EndOfLine .)* EndOfLine)> */
-		nil,
-		/* 34 Space <- <((&('\t') '\t') | (&(' ') ' ') | (&('\n' | '\r') EndOfLine))> */
-		nil,
-		/* 35 EndOfLine <- <(('\r' '\n') / '\n' / '\r')> */
-		func() bool {
-			position270, tokenIndex270, depth270 := position, tokenIndex, depth
-			{
-				position271 := position
-				depth++
-				{
-					position272, tokenIndex272, depth272 := position, tokenIndex, depth
-					if buffer[position] != rune('\r') {
-						goto l273
-					}
-					position++
-					if buffer[position] != rune('\n') {
-						goto l273
-					}
-					position++
-					goto l272
-				l273:
-					position, tokenIndex, depth = position272, tokenIndex272, depth272
-					if buffer[position] != rune('\n') {
-						goto l274
-					}
-					position++
-					goto l272
-				l274:
-					position, tokenIndex, depth = position272, tokenIndex272, depth272
-					if buffer[position] != rune('\r') {
-						goto l270
-					}
-					position++
-				}
-			l272:
-				depth--
-				add(ruleEndOfLine, position271)
-			}
-			return true
-		l270:
-			position, tokenIndex, depth = position270, tokenIndex270, depth270
-			return false
-		},
-		/* 36 EndOfFile <- <!.> */
-		nil,
-		/* 37 Action <- <('{' <ActionBody*> '}' Spacing)> */
-		func() bool {
-			position276, tokenIndex276, depth276 := position, tokenIndex, depth
-			{
-				position277 := position
-				depth++
-				if buffer[position] != rune('{') {
-					goto l276
-				}
-				position++
-				{
-					position278 := position
-					depth++
-				l279:
-					{
-						position280, tokenIndex280, depth280 := position, tokenIndex, depth
-						if !_rules[ruleActionBody]() {
-							goto l280
-						}
-						goto l279
-					l280:
-						position, tokenIndex, depth = position280, tokenIndex280, depth280
-					}
-					depth--
-					add(rulePegText, position278)
-				}
-				if buffer[position] != rune('}') {
-					goto l276
-				}
-				position++
-				if !_rules[ruleSpacing]() {
-					goto l276
-				}
-				depth--
-				add(ruleAction, position277)
-			}
-			return true
-		l276:
-			position, tokenIndex, depth = position276, tokenIndex276, depth276
-			return false
-		},
-		/* 38 ActionBody <- <((!('{' / '}') .) / ('{' ActionBody* '}'))> */
-		func() bool {
-			position281, tokenIndex281, depth281 := position, tokenIndex, depth
-			{
-				position282 := position
-				depth++
-				{
-					position283, tokenIndex283, depth283 := position, tokenIndex, depth
-					{
-						position285, tokenIndex285, depth285 := position, tokenIndex, depth
-						{
-							position286, tokenIndex286, depth286 := position, tokenIndex, depth
-							if buffer[position] != rune('{') {
-								goto l287
-							}
-							position++
-							goto l286
-						l287:
-							position, tokenIndex, depth = position286, tokenIndex286, depth286
-							if buffer[position] != rune('}') {
-								goto l285
-							}
-							position++
-						}
-					l286:
-						goto l284
-					l285:
-						position, tokenIndex, depth = position285, tokenIndex285, depth285
-					}
-					if !matchDot() {
-						goto l284
-					}
-					goto l283
-				l284:
-					position, tokenIndex, depth = position283, tokenIndex283, depth283
-					if buffer[position] != rune('{') {
-						goto l281
-					}
-					position++
-				l288:
-					{
-						position289, tokenIndex289, depth289 := position, tokenIndex, depth
-						if !_rules[ruleActionBody]() {
-							goto l289
-						}
-						goto l288
-					l289:
-						position, tokenIndex, depth = position289, tokenIndex289, depth289
-					}
-					if buffer[position] != rune('}') {
-						goto l281
-					}
-					position++
-				}
-			l283:
-				depth--
-				add(ruleActionBody, position282)
-			}
-			return true
-		l281:
-			position, tokenIndex, depth = position281, tokenIndex281, depth281
-			return false
-		},
-		/* 39 Begin <- <('<' Spacing)> */
-		nil,
-		/* 40 End <- <('>' Spacing)> */
-		nil,
-		/* 42 Action0 <- <{ p.AddPackage(text) }> */
-		nil,
-		/* 43 Action1 <- <{ p.AddPeg(text) }> */
-		nil,
-		/* 44 Action2 <- <{ p.AddState(text) }> */
-		nil,
-		nil,
-		/* 46 Action3 <- <{ p.AddImport(text) }> */
-		nil,
-		/* 47 Action4 <- <{ p.AddRule(text) }> */
-		nil,
-		/* 48 Action5 <- <{ p.AddExpression() }> */
-		nil,
-		/* 49 Action6 <- <{ p.AddAlternate() }> */
-		nil,
-		/* 50 Action7 <- <{ p.AddNil(); p.AddAlternate() }> */
-		nil,
-		/* 51 Action8 <- <{ p.AddNil() }> */
-		nil,
-		/* 52 Action9 <- <{ p.AddSequence() }> */
-		nil,
-		/* 53 Action10 <- <{ p.AddPredicate(text) }> */
-		nil,
-		/* 54 Action11 <- <{ p.AddStateChange(text) }> */
-		nil,
-		/* 55 Action12 <- <{ p.AddPeekFor() }> */
-		nil,
-		/* 56 Action13 <- <{ p.AddPeekNot() }> */
-		nil,
-		/* 57 Action14 <- <{ p.AddQuery() }> */
-		nil,
-		/* 58 Action15 <- <{ p.AddStar() }> */
-		nil,
-		/* 59 Action16 <- <{ p.AddPlus() }> */
-		nil,
-		/* 60 Action17 <- <{ p.AddName(text) }> */
-		nil,
-		/* 61 Action18 <- <{ p.AddDot() }> */
-		nil,
-		/* 62 Action19 <- <{ p.AddAction(text) }> */
-		nil,
-		/* 63 Action20 <- <{ p.AddPush() }> */
-		nil,
-		/* 64 Action21 <- <{ p.AddSequence() }> */
-		nil,
-		/* 65 Action22 <- <{ p.AddSequence() }> */
-		nil,
-		/* 66 Action23 <- <{ p.AddPeekNot(); p.AddDot(); p.AddSequence() }> */
-		nil,
-		/* 67 Action24 <- <{ p.AddPeekNot(); p.AddDot(); p.AddSequence() }> */
-		nil,
-		/* 68 Action25 <- <{ p.AddAlternate() }> */
-		nil,
-		/* 69 Action26 <- <{ p.AddAlternate() }> */
-		nil,
-		/* 70 Action27 <- <{ p.AddRange() }> */
-		nil,
-		/* 71 Action28 <- <{ p.AddDoubleRange() }> */
-		nil,
-		/* 72 Action29 <- <{ p.AddCharacter(text) }> */
-		nil,
-		/* 73 Action30 <- <{ p.AddDoubleCharacter(text) }> */
-		nil,
-		/* 74 Action31 <- <{ p.AddCharacter(text) }> */
-		nil,
-		/* 75 Action32 <- <{ p.AddCharacter("\a") }> */
-		nil,
-		/* 76 Action33 <- <{ p.AddCharacter("\b") }> */
-		nil,
-		/* 77 Action34 <- <{ p.AddCharacter("\x1B") }> */
-		nil,
-		/* 78 Action35 <- <{ p.AddCharacter("\f") }> */
-		nil,
-		/* 79 Action36 <- <{ p.AddCharacter("\n") }> */
-		nil,
-		/* 80 Action37 <- <{ p.AddCharacter("\r") }> */
-		nil,
-		/* 81 Action38 <- <{ p.AddCharacter("\t") }> */
-		nil,
-		/* 82 Action39 <- <{ p.AddCharacter("\v") }> */
-		nil,
-		/* 83 Action40 <- <{ p.AddCharacter("'") }> */
-		nil,
-		/* 84 Action41 <- <{ p.AddCharacter("\"") }> */
-		nil,
-		/* 85 Action42 <- <{ p.AddCharacter("[") }> */
-		nil,
-		/* 86 Action43 <- <{ p.AddCharacter("]") }> */
-		nil,
-		/* 87 Action44 <- <{ p.AddCharacter("-") }> */
-		nil,
-		/* 88 Action45 <- <{ p.AddHexaCharacter(text) }> */
-		nil,
-		/* 89 Action46 <- <{ p.AddOctalCharacter(text) }> */
-		nil,
-		/* 90 Action47 <- <{ p.AddOctalCharacter(text) }> */
-		nil,
-		/* 91 Action48 <- <{ p.AddCharacter("\\") }> */
-		nil,
-	}
-	p.rules = _rules
-}
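
The generated parser deleted above threads `position`/`tokenIndex` state through every rule and restores it with `goto` labels whenever an alternative fails. A minimal standalone sketch of that save/restore backtracking pattern (hypothetical `parser` type and `escape` rule, not part of the upstream change; it mimics only the `'\\' ('n' / 't')` branches of the Escape rule):

```go
package main

import "fmt"

// parser mirrors the state threaded through the generated code:
// a rune buffer plus a cursor that is saved before each alternative
// and restored when that alternative fails (backtracking).
type parser struct {
	buffer   []rune
	position int
}

// matchRune consumes one expected rune, advancing the cursor on success.
func (p *parser) matchRune(r rune) bool {
	if p.position < len(p.buffer) && p.buffer[p.position] == r {
		p.position++
		return true
	}
	return false
}

// escape recognizes a tiny subset of the Escape rule: '\\' ('n' / 't').
// Each alternative saves the cursor and restores it on failure, exactly
// like the position/tokenIndex save-restore in the generated parser.
func (p *parser) escape() bool {
	save := p.position
	if p.matchRune('\\') && p.matchRune('n') {
		return true
	}
	p.position = save
	if p.matchRune('\\') && p.matchRune('t') {
		return true
	}
	p.position = save
	return false
}

func main() {
	for _, s := range []string{`\n`, `\t`, `\x`} {
		p := &parser{buffer: []rune(s)}
		fmt.Printf("%q -> %v\n", s, p.escape())
	}
}
```

The generated code unrolls this same structure into labels and `goto` statements rather than function calls, which is why each deleted rule above is dominated by `positionN, tokenIndexN := ...` saves and matching restores.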
diff --git a/bootstrap/main.go b/bootstrap/main.go
index be5fc4c..08e9f4f 100644
--- a/bootstrap/main.go
+++ b/bootstrap/main.go
@@ -8,11 +8,13 @@ import (
 	"fmt"
 	"os"
 	"runtime"
+
+	"github.com/pointlander/peg/tree"
 )
 
 func main() {
 	runtime.GOMAXPROCS(2)
-	t := New(true, true)
+	t := tree.New(true, true, false)
 
 	/*package main
 
@@ -25,16 +27,15 @@ func main() {
 	   *Tree
 	  }*/
 	t.AddPackage("main")
+	t.AddImport("github.com/pointlander/peg/tree")
 	t.AddPeg("Peg")
 	t.AddState(`
- *Tree
+ *tree.Tree
 `)
 
 	addDot := t.AddDot
 	addName := t.AddName
 	addCharacter := t.AddCharacter
-	addDoubleCharacter := t.AddDoubleCharacter
-	addHexaCharacter := t.AddHexaCharacter
 	addAction := t.AddAction
 
 	addRule := func(name string, item func()) {
@@ -85,22 +86,11 @@ func main() {
 		t.AddRange()
 	}
 
-	addDoubleRange := func(begin, end string) {
-		addCharacter(begin)
-		addCharacter(end)
-		t.AddDoubleRange()
-	}
-
 	addStar := func(item func()) {
 		item()
 		t.AddStar()
 	}
 
-	addPlus := func(item func()) {
-		item()
-		t.AddPlus()
-	}
-
 	addQuery := func(item func()) {
 		item()
 		t.AddQuery()
@@ -121,55 +111,17 @@ func main() {
 		t.AddPeekFor()
 	}
 
-	/* Grammar         <- Spacing 'package' MustSpacing Identifier      { p.AddPackage(text) }
-	   Import*
-	   'type' MustSpacing Identifier         { p.AddPeg(text) }
-	   'Peg' Spacing Action              { p.AddState(text) }
-	   Definition+ EndOfFile */
+	/* Grammar <- Spacing { hdr; } Action* Definition* !. */
 	addRule("Grammar", func() {
 		addSequence(
 			func() { addName("Spacing") },
-			func() { addString("package") },
-			func() { addName("MustSpacing") },
-			func() { addName("Identifier") },
-			func() { addAction(" p.AddPackage(text) ") },
-			func() { addStar(func() { addName("Import") }) },
-			func() { addString("type") },
-			func() { addName("MustSpacing") },
-			func() { addName("Identifier") },
-			func() { addAction(" p.AddPeg(text) ") },
-			func() { addString("Peg") },
-			func() { addName("Spacing") },
-			func() { addName("Action") },
-			func() { addAction(" p.AddState(text) ") },
-			func() { addPlus(func() { addName("Definition") }) },
-			func() { addName("EndOfFile") },
-		)
-	})
-
-	/* Import          <- 'import' Spacing ["] < [a-zA-Z_/.\-]+ > ["] Spacing { p.AddImport(text) } */
-	addRule("Import", func() {
-		addSequence(
-			func() { addString("import") },
-			func() { addName("Spacing") },
-			func() { addCharacter(`"`) },
-			func() {
-				addPush(func() {
-					addPlus(func() {
-						addAlternate(
-							func() { addRange(`a`, `z`) },
-							func() { addRange(`A`, `Z`) },
-							func() { addCharacter(`_`) },
-							func() { addCharacter(`/`) },
-							func() { addCharacter(`.`) },
-							func() { addCharacter(`-`) },
-						)
-					})
-				})
-			},
-			func() { addCharacter(`"`) },
-			func() { addName("Spacing") },
-			func() { addAction(" p.AddImport(text) ") },
+			func() { addAction(`p.AddPackage("main")`) },
+			func() { addAction(`p.AddImport("github.com/pointlander/peg/tree")`) },
+			func() { addAction(`p.AddPeg("Peg")`) },
+			func() { addAction(`p.AddState("*tree.Tree")`) },
+			func() { addStar(func() { addName("Action") }) },
+			func() { addStar(func() { addName("Definition") }) },
+			func() { addPeekNot(func() { addDot() }) },
 		)
 	})
 
@@ -198,40 +150,23 @@ func main() {
 		)
 	})
 
-	/* Expression      <- Sequence (Slash Sequence     { p.AddAlternate() }
-	           )* (Slash           { p.AddNil(); p.AddAlternate() }
-	              )?
-	/ { p.AddNil() } */
+	/* Expression <- Sequence (Slash Sequence { p.AddAlternate() })* */
 	addRule("Expression", func() {
-		addAlternate(
+		addSequence(
+			func() { addName("Sequence") },
 			func() {
-				addSequence(
-					func() { addName("Sequence") },
-					func() {
-						addStar(func() {
-							addSequence(
-								func() { addName("Slash") },
-								func() { addName("Sequence") },
-								func() { addAction(" p.AddAlternate() ") },
-							)
-						})
-					},
-					func() {
-						addQuery(func() {
-							addSequence(
-								func() { addName("Slash") },
-								func() { addAction(" p.AddNil(); p.AddAlternate() ") },
-							)
-						})
-					},
-				)
+				addStar(func() {
+					addSequence(
+						func() { addName("Slash") },
+						func() { addName("Sequence") },
+						func() { addAction(" p.AddAlternate() ") },
+					)
+				})
 			},
-			func() { addAction(" p.AddNil() ") },
 		)
 	})
 
-	/* Sequence        <- Prefix (Prefix               { p.AddSequence() }
-	   )* */
+	/* Sequence <- Prefix (Prefix { p.AddSequence() } )* */
 	addRule("Sequence", func() {
 		addSequence(
 			func() { addName("Prefix") },
@@ -246,37 +181,12 @@ func main() {
 		)
 	})
 
-	/* Prefix          <- And Action                   { p.AddPredicate(text) }
-	   / Not Action                   { p.AddStateChange(text) }
-	   / And Suffix                   { p.AddPeekFor() }
-	   / Not Suffix                   { p.AddPeekNot() }
-	   /     Suffix */
+	/* Prefix <- '!' Suffix { p.AddPeekNot() } / Suffix */
 	addRule("Prefix", func() {
 		addAlternate(
 			func() {
 				addSequence(
-					func() { addName("And") },
-					func() { addName("Action") },
-					func() { addAction(" p.AddPredicate(text) ") },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addName("Not") },
-					func() { addName("Action") },
-					func() { addAction(" p.AddStateChange(text) ") },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addName("And") },
-					func() { addName("Suffix") },
-					func() { addAction(" p.AddPeekFor() ") },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addName("Not") },
+					func() { addCharacter(`!`) },
 					func() { addName("Suffix") },
 					func() { addAction(" p.AddPeekNot() ") },
 				)
@@ -285,10 +195,9 @@ func main() {
 		)
 	})
 
-	/* Suffix          <- Primary (Question            { p.AddQuery() }
-	   / Star             { p.AddStar() }
-	   / Plus             { p.AddPlus() }
-	 )? */
+	/* Suffix          <- Primary (	Question	{ p.AddQuery() }
+	  				/ Star		{ p.AddStar() }
+	)? */
 	addRule("Suffix", func() {
 		addSequence(
 			func() { addName("Primary") },
@@ -307,12 +216,6 @@ func main() {
 								func() { addAction(" p.AddStar() ") },
 							)
 						},
-						func() {
-							addSequence(
-								func() { addName("Plus") },
-								func() { addAction(" p.AddPlus() ") },
-							)
-						},
 					)
 				})
 			},
@@ -367,14 +270,14 @@ func main() {
 		)
 	})
 
-	/* Identifier      <- < IdentStart IdentCont* > Spacing */
+	/* Identifier      <- < Ident Ident* > Spacing */
 	addRule("Identifier", func() {
 		addSequence(
 			func() {
 				addPush(func() {
 					addSequence(
-						func() { addName("IdentStart") },
-						func() { addStar(func() { addName("IdentCont") }) },
+						func() { addName("Ident") },
+						func() { addStar(func() { addName("Ident") }) },
 					)
 				})
 			},
@@ -382,169 +285,54 @@ func main() {
 		)
 	})
 
-	/* IdentStart      <- [[a-z_]] */
-	addRule("IdentStart", func() {
-		addAlternate(
-			func() { addDoubleRange(`a`, `z`) },
-			func() { addCharacter(`_`) },
-		)
-	})
-
-	/* IdentCont       <- IdentStart / [0-9] */
-	addRule("IdentCont", func() {
+	/* Ident <- [A-Za-z] */
+	addRule("Ident", func() {
 		addAlternate(
-			func() { addName("IdentStart") },
-			func() { addRange(`0`, `9`) },
+			func() { addRange(`A`, `Z`) },
+			func() { addRange(`a`, `z`) },
 		)
 	})
 
-	/* Literal         <- ['] (!['] Char)? (!['] Char          { p.AddSequence() }
-	                     )* ['] Spacing
-	   / ["] (!["] DoubleChar)? (!["] DoubleChar          { p.AddSequence() }
-	                            )* ["] Spacing */
+	/* Literal <- ['] !['] Char (!['] Char { p.AddSequence() } )* ['] Spacing */
 	addRule("Literal", func() {
-		addAlternate(
-			func() {
-				addSequence(
-					func() { addCharacter(`'`) },
-					func() {
-						addQuery(func() {
-							addSequence(
-								func() { addPeekNot(func() { addCharacter(`'`) }) },
-								func() { addName("Char") },
-							)
-						})
-					},
-					func() {
-						addStar(func() {
-							addSequence(
-								func() { addPeekNot(func() { addCharacter(`'`) }) },
-								func() { addName("Char") },
-								func() { addAction(` p.AddSequence() `) },
-							)
-						})
-					},
-					func() { addCharacter(`'`) },
-					func() { addName("Spacing") },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter(`"`) },
-					func() {
-						addQuery(func() {
-							addSequence(
-								func() { addPeekNot(func() { addCharacter(`"`) }) },
-								func() { addName("DoubleChar") },
-							)
-						})
-					},
-					func() {
-						addStar(func() {
-							addSequence(
-								func() { addPeekNot(func() { addCharacter(`"`) }) },
-								func() { addName("DoubleChar") },
-								func() { addAction(` p.AddSequence() `) },
-							)
-						})
-					},
-					func() { addCharacter(`"`) },
-					func() { addName("Spacing") },
-				)
-			},
-		)
-	})
-
-	/* Class  <- ( '[[' ( '^' DoubleRanges              { p.AddPeekNot(); p.AddDot(); p.AddSequence() }
-	          / DoubleRanges )?
-	     ']]'
-	   / '[' ( '^' Ranges                     { p.AddPeekNot(); p.AddDot(); p.AddSequence() }
-	         / Ranges )?
-	     ']' )
-	   Spacing */
-	addRule("Class", func() {
 		addSequence(
+			func() { addCharacter(`'`) },
 			func() {
-				addAlternate(
-					func() {
-						addSequence(
-							func() { addString(`[[`) },
-							func() {
-								addQuery(func() {
-									addAlternate(
-										func() {
-											addSequence(
-												func() { addCharacter(`^`) },
-												func() { addName("DoubleRanges") },
-												func() { addAction(` p.AddPeekNot(); p.AddDot(); p.AddSequence() `) },
-											)
-										},
-										func() { addName("DoubleRanges") },
-									)
-								})
-							},
-							func() { addString(`]]`) },
-						)
-					},
-					func() {
-						addSequence(
-							func() { addCharacter(`[`) },
-							func() {
-								addQuery(func() {
-									addAlternate(
-										func() {
-											addSequence(
-												func() { addCharacter(`^`) },
-												func() { addName("Ranges") },
-												func() { addAction(` p.AddPeekNot(); p.AddDot(); p.AddSequence() `) },
-											)
-										},
-										func() { addName("Ranges") },
-									)
-								})
-							},
-							func() { addCharacter(`]`) },
-						)
-					},
+				addSequence(
+					func() { addPeekNot(func() { addCharacter(`'`) }) },
+					func() { addName("Char") },
 				)
 			},
-			func() { addName("Spacing") },
-		)
-	})
-
-	/* Ranges          <- !']' Range (!']' Range  { p.AddAlternate() }
-	   )* */
-	addRule("Ranges", func() {
-		addSequence(
-			func() { addPeekNot(func() { addCharacter(`]`) }) },
-			func() { addName("Range") },
 			func() {
 				addStar(func() {
 					addSequence(
-						func() { addPeekNot(func() { addCharacter(`]`) }) },
-						func() { addName("Range") },
-						func() { addAction(" p.AddAlternate() ") },
+						func() { addPeekNot(func() { addCharacter(`'`) }) },
+						func() { addName("Char") },
+						func() { addAction(` p.AddSequence() `) },
 					)
 				})
 			},
+			func() { addCharacter(`'`) },
+			func() { addName("Spacing") },
 		)
 	})
 
-	/* DoubleRanges          <- !']]' DoubleRange (!']]' DoubleRange  { p.AddAlternate() }
-	   )* */
-	addRule("DoubleRanges", func() {
+	/* Class  <- '[' Range (!']' Range { p.AddAlternate() })* ']' Spacing */
+	addRule("Class", func() {
 		addSequence(
-			func() { addPeekNot(func() { addString(`]]`) }) },
-			func() { addName("DoubleRange") },
+			func() { addCharacter(`[`) },
+			func() { addName("Range") },
 			func() {
 				addStar(func() {
 					addSequence(
-						func() { addPeekNot(func() { addString(`]]`) }) },
-						func() { addName("DoubleRange") },
+						func() { addPeekNot(func() { addCharacter(`]`) }) },
+						func() { addName("Range") },
 						func() { addAction(" p.AddAlternate() ") },
 					)
 				})
 			},
+			func() { addCharacter(`]`) },
+			func() { addName("Spacing") },
 		)
 	})
 
@@ -564,192 +352,23 @@ func main() {
 		)
 	})
 
-	/* DoubleRange      <- Char '-' Char { p.AddDoubleRange() }
-	   / DoubleChar */
-	addRule("DoubleRange", func() {
-		addAlternate(
-			func() {
-				addSequence(
-					func() { addName("Char") },
-					func() { addCharacter(`-`) },
-					func() { addName("Char") },
-					func() { addAction(" p.AddDoubleRange() ") },
-				)
-			},
-			func() { addName("DoubleChar") },
-		)
-	})
-
-	/* Char            <- Escape
-	   / !'\\' <.>                  { p.AddCharacter(text) } */
+	/* Char	<- Escape
+	/  '\\' "0x"<[0-9a-f]*>   { p.AddHexaCharacter(text) }
+	/  '\\\\'                  { p.AddCharacter("\\") }
+	/  !'\\' <.>                  { p.AddCharacter(text) } */
 	addRule("Char", func() {
 		addAlternate(
-			func() { addName("Escape") },
-			func() {
-				addSequence(
-					func() { addPeekNot(func() { addCharacter("\\") }) },
-					func() { addPush(func() { addDot() }) },
-					func() { addAction(` p.AddCharacter(text) `) },
-				)
-			},
-		)
-	})
-
-	/* DoubleChar      <- Escape
-	   / <[a-zA-Z]>                 { p.AddDoubleCharacter(text) }
-	   / !'\\' <.>                  { p.AddCharacter(text) } */
-	addRule("DoubleChar", func() {
-		addAlternate(
-			func() { addName("Escape") },
-			func() {
-				addSequence(
-					func() {
-						addPush(func() {
-							addAlternate(
-								func() { addRange(`a`, `z`) },
-								func() { addRange(`A`, `Z`) },
-							)
-						})
-					},
-					func() { addAction(` p.AddDoubleCharacter(text) `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addPeekNot(func() { addCharacter("\\") }) },
-					func() { addPush(func() { addDot() }) },
-					func() { addAction(` p.AddCharacter(text) `) },
-				)
-			},
-		)
-	})
-
-	/* Escape            <- "\\a"                      { p.AddCharacter("\a") }   # bell
-		                      / "\\b"                      { p.AddCharacter("\b") }   # bs
-	                              / "\\e"                      { p.AddCharacter("\x1B") } # esc
-	                              / "\\f"                      { p.AddCharacter("\f") }   # ff
-	                              / "\\n"                      { p.AddCharacter("\n") }   # nl
-	                              / "\\r"                      { p.AddCharacter("\r") }   # cr
-	                              / "\\t"                      { p.AddCharacter("\t") }   # ht
-	                              / "\\v"                      { p.AddCharacter("\v") }   # vt
-	                              / "\\'"                      { p.AddCharacter("'") }
-	                              / '\\"'                      { p.AddCharacter("\"") }
-	                              / '\\['                      { p.AddCharacter("[") }
-	                              / '\\]'                      { p.AddCharacter("]") }
-	                              / '\\-'                      { p.AddCharacter("-") }
-				      / '\\' "0x"<[0-9a-fA-F]+>    { p.AddHexaCharacter(text) }
-	                              / '\\' <[0-3][0-7][0-7]>     { p.AddOctalCharacter(text) }
-	                              / '\\' <[0-7][0-7]?>         { p.AddOctalCharacter(text) }
-	                              / '\\\\'                     { p.AddCharacter("\\") } */
-	addRule("Escape", func() {
-		addAlternate(
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() { addDoubleCharacter(`a`) },
-					func() { addAction(` p.AddCharacter("\a") `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() { addDoubleCharacter(`b`) },
-					func() { addAction(` p.AddCharacter("\b") `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() { addDoubleCharacter(`e`) },
-					func() { addAction(` p.AddCharacter("\x1B") `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() { addDoubleCharacter(`f`) },
-					func() { addAction(` p.AddCharacter("\f") `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() { addDoubleCharacter(`n`) },
-					func() { addAction(` p.AddCharacter("\n") `) },
-				)
-			},
 			func() {
 				addSequence(
 					func() { addCharacter("\\") },
-					func() { addDoubleCharacter(`r`) },
-					func() { addAction(` p.AddCharacter("\r") `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() { addDoubleCharacter(`t`) },
-					func() { addAction(` p.AddCharacter("\t") `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() { addDoubleCharacter(`v`) },
-					func() { addAction(` p.AddCharacter("\v") `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() { addCharacter(`'`) },
-					func() { addAction(` p.AddCharacter("'") `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() { addCharacter(`"`) },
-					func() { addAction(` p.AddCharacter("\"") `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() { addCharacter(`[`) },
-					func() { addAction(` p.AddCharacter("[") `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() { addCharacter(`]`) },
-					func() { addAction(` p.AddCharacter("]") `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() { addCharacter(`-`) },
-					func() { addAction(` p.AddCharacter("-") `) },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter("\\") },
-					func() {
-						addSequence(
-							func() { addCharacter(`0`) },
-							func() { addDoubleCharacter(`x`) },
-						)
-					},
+					func() { addCharacter(`0`) },
+					func() { addCharacter(`x`) },
 					func() {
 						addPush(func() {
-							addPlus(func() {
+							addStar(func() {
 								addAlternate(
 									func() { addRange(`0`, `9`) },
 									func() { addRange(`a`, `f`) },
-									func() { addRange(`A`, `F`) },
 								)
 							})
 						})
@@ -760,51 +379,23 @@ func main() {
 			func() {
 				addSequence(
 					func() { addCharacter("\\") },
-					func() {
-						addPush(func() {
-							addSequence(
-								func() { addRange(`0`, `3`) },
-								func() { addRange(`0`, `7`) },
-								func() { addRange(`0`, `7`) },
-							)
-						})
-					},
-					func() { addAction(` p.AddOctalCharacter(text) `) },
-				)
-			},
-			func() {
-				addSequence(
 					func() { addCharacter("\\") },
-					func() {
-						addPush(func() {
-							addSequence(
-								func() { addRange(`0`, `7`) },
-								func() { addQuery(func() { addRange(`0`, `7`) }) },
-							)
-						})
-					},
-					func() { addAction(` p.AddOctalCharacter(text) `) },
+					func() { addAction(` p.AddCharacter("\\") `) },
 				)
 			},
 			func() {
 				addSequence(
-					func() { addCharacter("\\") },
-					func() { addCharacter("\\") },
-					func() { addAction(` p.AddCharacter("\\") `) },
+					func() { addPeekNot(func() { addCharacter("\\") }) },
+					func() { addPush(func() { addDot() }) },
+					func() { addAction(` p.AddCharacter(text) `) },
 				)
 			},
 		)
 	})
-
-	/* LeftArrow       <- ('<-' / '\0x2190') Spacing */
+	/* LeftArrow       <- '<-' Spacing */
 	addRule("LeftArrow", func() {
 		addSequence(
-			func() {
-				addAlternate(
-					func() { addString(`<-`) },
-					func() { addHexaCharacter("2190") },
-				)
-			},
+			func() { addString(`<-`) },
 			func() { addName("Spacing") },
 		)
 	})
@@ -817,22 +408,6 @@ func main() {
 		)
 	})
 
-	/* And             <- '&' Spacing */
-	addRule("And", func() {
-		addSequence(
-			func() { addCharacter(`&`) },
-			func() { addName("Spacing") },
-		)
-	})
-
-	/* Not             <- '!' Spacing */
-	addRule("Not", func() {
-		addSequence(
-			func() { addCharacter(`!`) },
-			func() { addName("Spacing") },
-		)
-	})
-
 	/* Question        <- '?' Spacing */
 	addRule("Question", func() {
 		addSequence(
@@ -849,14 +424,6 @@ func main() {
 		)
 	})
 
-	/* Plus            <- '+' Spacing */
-	addRule("Plus", func() {
-		addSequence(
-			func() { addCharacter(`+`) },
-			func() { addName("Spacing") },
-		)
-	})
-
 	/* Open            <- '(' Spacing */
 	addRule("Open", func() {
 		addSequence(
@@ -881,25 +448,16 @@ func main() {
 		)
 	})
 
-	/* SpaceComment         <- (Space / Comment) */
-	addRule("SpaceComment", func() {
-		addAlternate(
-			func() { addName("Space") },
-			func() { addName("Comment") },
-		)
-	})
-
-	/* Spacing         <- SpaceComment* */
 	addRule("Spacing", func() {
-		addStar(func() { addName("SpaceComment") })
+		addStar(func() {
+			addAlternate(
+				func() { addName("Space") },
+				func() { addName("Comment") },
+			)
+		})
 	})
 
-	/* MustSpacing     <- SpaceComment+ */
-	addRule("MustSpacing", func() {
-		addPlus(func() { t.AddName("SpaceComment") })
-	})
-
-	/* Comment         <- '#' (!EndOfLine .)* EndOfLine */
+	/* Comment         <- '#' (!EndOfLine .)* */
 	addRule("Comment", func() {
 		addSequence(
 			func() { addCharacter(`#`) },
@@ -911,7 +469,6 @@ func main() {
 					)
 				})
 			},
-			func() { addName("EndOfLine") },
 		)
 	})
 
@@ -933,18 +490,22 @@ func main() {
 		)
 	})
 
-	/* EndOfFile       <- !. */
-	addRule("EndOfFile", func() {
-		addPeekNot(func() { addDot() })
-	})
-
-	/* Action		<- '{' < ActionBody* > '}' Spacing */
+	/* Action		<- '{' < (![}].)* > '}' Spacing */
 	addRule("Action", func() {
 		addSequence(
 			func() { addCharacter(`{`) },
 			func() {
 				addPush(func() {
-					addStar(func() { addName("ActionBody") })
+					addStar(func() {
+						addSequence(
+							func() {
+								addPeekNot(func() {
+									addCharacter(`}`)
+								})
+							},
+							func() { addDot() },
+						)
+					})
 				})
 			},
 			func() { addCharacter(`}`) },
@@ -952,32 +513,6 @@ func main() {
 		)
 	})
 
-	/* ActionBody	<- [^{}] / '{' ActionBody* '}' */
-	addRule("ActionBody", func() {
-		addAlternate(
-			func() {
-				addSequence(
-					func() {
-						addPeekNot(func() {
-							addAlternate(
-								func() { addCharacter(`{`) },
-								func() { addCharacter(`}`) },
-							)
-						})
-					},
-					func() { addDot() },
-				)
-			},
-			func() {
-				addSequence(
-					func() { addCharacter(`{`) },
-					func() { addStar(func() { addName("ActionBody") }) },
-					func() { addCharacter(`}`) },
-				)
-			},
-		)
-	})
-
 	/* Begin           <- '<' Spacing */
 	addRule("Begin", func() {
 		addSequence(
@@ -995,11 +530,11 @@ func main() {
 	})
 
 	filename := "bootstrap.peg.go"
-	out, error := os.OpenFile(filename, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
-	if error != nil {
-		fmt.Printf("%v: %v\n", filename, error)
+	out, err := os.OpenFile(filename, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
+	if err != nil {
+		fmt.Printf("%v: %v\n", filename, err)
 		return
 	}
 	defer out.Close()
-	t.Compile(filename, out)
+	t.Compile(filename, os.Args, out)
 }
diff --git a/bootstrap/peg.go b/bootstrap/peg.go
deleted file mode 120000
index 9bf17c7..0000000
--- a/bootstrap/peg.go
+++ /dev/null
@@ -1 +0,0 @@
-../peg.go
\ No newline at end of file
diff --git a/build.go b/build.go
new file mode 100644
index 0000000..c1fc114
--- /dev/null
+++ b/build.go
@@ -0,0 +1,465 @@
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build ignore
+
+package main
+
+import (
+	"flag"
+	"fmt"
+	"io/ioutil"
+	"log"
+	"os"
+	"os/exec"
+	"path/filepath"
+	"reflect"
+	"runtime"
+	"strings"
+	"text/template"
+	"time"
+)
+
+func main() {
+	flag.Parse()
+
+	args, target := flag.Args(), "peg"
+	if len(args) > 0 {
+		target = args[0]
+	}
+
+	switch target {
+	case "buildinfo":
+		buildinfo()
+	case "peg":
+		peg()
+	case "clean":
+		clean()
+	case "test":
+		test()
+	case "bench":
+		bench()
+	case "help":
+		fmt.Println("go run build.go [target]")
+		fmt.Println(" peg - build peg from scratch")
+		fmt.Println(" clean - clean up")
+		fmt.Println(" test - run full test")
+		fmt.Println(" bench - run benchmark")
+		fmt.Println(" buildinfo - generate buildinfo.go")
+	}
+}
+
+const BuildinfoTemplate = `// Code Generated by "build.go buildinfo"  DO NOT EDIT.
+package main
+
+const (
+	// VERSION is the version of peg
+	VERSION   = "{{.Version}}"
+	// BUILDTIME is the build time of peg
+	BUILDTIME = "{{.Buildtime}}"
+	// COMMIT is the commit hash of peg
+	COMMIT    = "{{.Commit}}"
+	// IS_TAGGED is there a version
+	IS_TAGGED = {{.IsTagged}}
+)
+`
+
+func buildinfo() {
+	log.SetPrefix("buildinfo:")
+	type info struct {
+		Version   string
+		Buildtime string
+		Commit    string
+		IsTagged  bool
+	}
+	infFile, err := os.Create("buildinfo.go")
+	defer infFile.Close()
+	if err != nil {
+		log.Println("open buildinfo.go: fatal:", err)
+	}
+	var inf info = info{
+		Version: "unknown", // show this if we can't get the version
+	}
+	vers, err := exec.Command("git", "tag", "--contains").Output()
+	if err != nil {
+		log.Println("error:", err)
+	} else if len(vers) > 1 { // ignore any single newlines that might exist
+		inf.IsTagged = true
+		inf.Version = strings.TrimSuffix(string(vers), "\n")
+	} else {
+		vers, err = exec.Command("git", "tag", "--merged", "--sort=v:refname").Output()
+		if err != nil {
+			log.Println("error:", err)
+		} else if len(vers) > 1 {
+			tags := strings.Split(string(vers), "\n")
+			inf.Version = tags[len(tags)-1]
+		}
+	}
+
+	cmit, err := exec.Command("git", "rev-parse", "HEAD").Output()
+	if err != nil {
+		log.Println("error:", err)
+	}
+	inf.Commit = strings.TrimSuffix(string(cmit), "\n")
+	// slice the constant to remove the timezone specifier
+	inf.Buildtime = time.Now().UTC().Format(time.RFC3339[0:19])
+
+	err = template.Must(template.New("buildinfo").Parse(BuildinfoTemplate)).Execute(infFile, inf)
+	if err != nil {
+		log.Println("error: template:", err)
+	}
+	log.SetPrefix("")
+}
+
+var processed = make(map[string]bool)
+
+func done(file string, deps ...interface{}) bool {
+	fini := true
+	file = filepath.FromSlash(file)
+	info, err := os.Stat(file)
+	if err != nil {
+		fini = false
+	}
+	for _, dep := range deps {
+		switch dep := dep.(type) {
+		case string:
+			if info == nil {
+				fini = false
+				break
+			}
+			dep = filepath.FromSlash(dep)
+			fileInfo, err := os.Stat(dep)
+			if err != nil {
+				panic(err)
+			}
+
+			if fileInfo.ModTime().After(info.ModTime()) {
+				fini = false
+			}
+		case func() bool:
+			name := runtime.FuncForPC(reflect.ValueOf(dep).Pointer()).Name()
+			if result, ok := processed[name]; ok {
+				fini = fini && result
+				fmt.Printf("%s is done\n", name)
+				break
+			}
+			result := dep()
+			fini = fini && result
+			fmt.Printf("%s\n", name)
+			processed[name] = result
+		}
+	}
+
+	return fini
+}
+
+func chdir(dir string) string {
+	dir = filepath.FromSlash(dir)
+	working, err := os.Getwd()
+	if err != nil {
+		panic(err)
+	}
+	err = os.Chdir(dir)
+	if err != nil {
+		panic(err)
+	}
+	fmt.Printf("cd %s\n", dir)
+	return working
+}
+
+func command(name, inputFile, outputFile string, arg ...string) {
+	name = filepath.FromSlash(name)
+	inputFile = filepath.FromSlash(inputFile)
+	outputFile = filepath.FromSlash(outputFile)
+	fmt.Print(name)
+	for _, a := range arg {
+		fmt.Printf(" %s", a)
+	}
+
+	cmd := exec.Command(name, arg...)
+
+	if inputFile != "" {
+		fmt.Printf(" < %s", inputFile)
+		input, err := ioutil.ReadFile(inputFile)
+		if err != nil {
+			panic(err)
+		}
+		writer, err := cmd.StdinPipe()
+		if err != nil {
+			panic(err)
+		}
+		go func() {
+			defer writer.Close()
+			_, err := writer.Write([]byte(input))
+			if err != nil {
+				panic(err)
+			}
+		}()
+	}
+
+	if outputFile != "" {
+		fmt.Printf(" > %s\n", outputFile)
+		output, err := cmd.Output()
+		if err != nil {
+			panic(err)
+		}
+		err = ioutil.WriteFile(outputFile, output, 0600)
+		if err != nil {
+			panic(err)
+		}
+	} else {
+		output, err := cmd.CombinedOutput()
+		fmt.Printf("\n%s", string(output))
+		if err != nil {
+			panic(err)
+		}
+	}
+}
+
+func delete(file string) {
+	file = filepath.FromSlash(file)
+	fmt.Printf("rm -f %s\n", file)
+	os.Remove(file)
+}
+
+func deleteFilesWithSuffix(suffix string) {
+	files, err := ioutil.ReadDir(".")
+	if err != nil {
+		panic(err)
+	}
+	for _, file := range files {
+		if strings.HasSuffix(file.Name(), suffix) {
+			delete(file.Name())
+		}
+	}
+}
+
+func bootstrap() bool {
+	if done("bootstrap/bootstrap", "bootstrap/main.go", "tree/peg.go") {
+		return true
+	}
+
+	wd := chdir("bootstrap")
+	defer chdir(wd)
+
+	command("go", "", "", "build")
+
+	return false
+}
+
+func peg0() bool {
+	if done("cmd/peg-bootstrap/peg0", "cmd/peg-bootstrap/main.go", bootstrap) {
+		return true
+	}
+
+	wd := chdir("cmd/peg-bootstrap/")
+	defer chdir(wd)
+
+	deleteFilesWithSuffix(".peg.go")
+	command("../../bootstrap/bootstrap", "", "")
+	command("go", "", "", "build", "-tags", "bootstrap", "-o", "peg0")
+
+	return false
+}
+
+func peg1() bool {
+	if done("cmd/peg-bootstrap/peg1", peg0, "cmd/peg-bootstrap/bootstrap.peg") {
+		return true
+	}
+
+	wd := chdir("cmd/peg-bootstrap/")
+	defer chdir(wd)
+
+	deleteFilesWithSuffix(".peg.go")
+	command("./peg0", "bootstrap.peg", "peg1.peg.go")
+	command("go", "", "", "build", "-tags", "bootstrap", "-o", "peg1")
+
+	return false
+}
+
+func peg2() bool {
+	if done("cmd/peg-bootstrap/peg2", peg1, "cmd/peg-bootstrap/peg.bootstrap.peg") {
+		return true
+	}
+
+	wd := chdir("cmd/peg-bootstrap/")
+	defer chdir(wd)
+
+	deleteFilesWithSuffix(".peg.go")
+	command("./peg1", "peg.bootstrap.peg", "peg2.peg.go")
+	command("go", "", "", "build", "-tags", "bootstrap", "-o", "peg2")
+
+	return false
+}
+
+func peg3() bool {
+	if done("cmd/peg-bootstrap/peg3", peg2, "peg.peg") {
+		return true
+	}
+
+	wd := chdir("cmd/peg-bootstrap/")
+	defer chdir(wd)
+
+	deleteFilesWithSuffix(".peg.go")
+	command("./peg2", "../../peg.peg", "peg3.peg.go")
+	command("go", "", "", "build", "-tags", "bootstrap", "-o", "peg3")
+
+	return false
+}
+
+func peg_bootstrap() bool {
+	if done("cmd/peg-bootstrap/peg-bootstrap", peg3) {
+		return true
+	}
+
+	wd := chdir("cmd/peg-bootstrap/")
+	defer chdir(wd)
+
+	deleteFilesWithSuffix(".peg.go")
+	command("./peg3", "../../peg.peg", "peg-bootstrap.peg.go")
+	command("go", "", "", "build", "-tags", "bootstrap", "-o", "peg-bootstrap")
+
+	return false
+}
+
+func peg_peg_go() bool {
+	if done("peg.peg.go", peg_bootstrap) {
+		return true
+	}
+
+	command("cmd/peg-bootstrap/peg-bootstrap", "peg.peg", "peg.peg.go")
+	command("go", "", "", "build")
+	command("./peg", "", "", "-inline", "-switch", "peg.peg")
+
+	return false
+}
+
+func peg() bool {
+	if done("peg", peg_peg_go, "main.go") {
+		return true
+	}
+
+	command("go", "", "", "build")
+
+	return false
+}
+
+func clean() bool {
+	delete("bootstrap/bootstrap")
+
+	delete("grammars/c/c.peg.go")
+	delete("grammars/calculator/calculator.peg.go")
+	delete("grammars/fexl/fexl.peg.go")
+	delete("grammars/java/java_1_7.peg.go")
+	delete("grammars/long_test/long.peg.go")
+
+	wd := chdir("cmd/peg-bootstrap/")
+	defer chdir(wd)
+
+	deleteFilesWithSuffix(".peg.go")
+	delete("peg0")
+	delete("peg1")
+	delete("peg2")
+	delete("peg3")
+	delete("peg-bootstrap")
+
+	return false
+}
+
+func grammars_c() bool {
+	if done("grammars/c/c.peg.go", peg, "grammars/c/c.peg") {
+		return true
+	}
+
+	wd := chdir("grammars/c/")
+	defer chdir(wd)
+
+	command("../../peg", "", "", "-switch", "-inline", "c.peg")
+
+	return false
+}
+
+func grammars_calculator() bool {
+	if done("grammars/calculator/calculator.peg.go", peg, "grammars/calculator/calculator.peg") {
+		return true
+	}
+
+	wd := chdir("grammars/calculator/")
+	defer chdir(wd)
+
+	command("../../peg", "", "", "-switch", "-inline", "calculator.peg")
+
+	return false
+}
+
+func grammars_calculator_ast() bool {
+	if done("grammars/calculator_ast/calculator.peg.go", peg, "grammars/calculator_ast/calculator.peg") {
+		return true
+	}
+
+	wd := chdir("grammars/calculator_ast/")
+	defer chdir(wd)
+
+	command("../../peg", "", "", "-switch", "-inline", "calculator.peg")
+
+	return false
+}
+
+func grammars_fexl() bool {
+	if done("grammars/fexl/fexl.peg.go", peg, "grammars/fexl/fexl.peg") {
+		return true
+	}
+
+	wd := chdir("grammars/fexl/")
+	defer chdir(wd)
+
+	command("../../peg", "", "", "-switch", "-inline", "fexl.peg")
+
+	return false
+}
+
+func grammars_java() bool {
+	if done("grammars/java/java_1_7.peg.go", peg, "grammars/java/java_1_7.peg") {
+		return true
+	}
+
+	wd := chdir("grammars/java/")
+	defer chdir(wd)
+
+	command("../../peg", "", "", "-switch", "-inline", "java_1_7.peg")
+
+	return false
+}
+
+func grammars_long_test() bool {
+	if done("grammars/long_test/long.peg.go", peg, "grammars/long_test/long.peg") {
+		return true
+	}
+
+	wd := chdir("grammars/long_test/")
+	defer chdir(wd)
+
+	command("../../peg", "", "", "-switch", "-inline", "long.peg")
+
+	return false
+}
+
+func test() bool {
+	if done("", grammars_c, grammars_calculator, grammars_calculator_ast,
+		grammars_fexl, grammars_java, grammars_long_test) {
+		return true
+	}
+
+	command("go", "", "", "test", "-short", "-tags", "grammars", "./...")
+
+	return false
+}
+
+func bench() bool {
+	peg()
+
+	command("go", "", "", "test", "-benchmem", "-bench", ".")
+
+	return false
+}
diff --git a/buildinfo.go b/buildinfo.go
new file mode 100644
index 0000000..3fc87cc
--- /dev/null
+++ b/buildinfo.go
@@ -0,0 +1,13 @@
+// Code Generated by "build.go buildinfo"  DO NOT EDIT.
+package main
+
+const (
+	// VERSION is the version of peg
+	VERSION   = "unknown"
+	// BUILDTIME is the build time of peg
+	BUILDTIME = "2020-08-26T03:40:14"
+	// COMMIT is the commit hash of peg
+	COMMIT    = "5cdb3adc061370cdd20392ffe2740cc8db104126"
+	// IS_TAGGED is there a version
+	IS_TAGGED = false
+)
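The `buildinfo.go` file above is what `build.go`'s `buildinfo` target emits by executing its `text/template` over an `info` struct. A self-contained sketch of that rendering step (template trimmed to two fields, values hypothetical):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// A cut-down version of BuildinfoTemplate from build.go.
const tmpl = "const (\n\tVERSION   = \"{{.Version}}\"\n\tIS_TAGGED = {{.IsTagged}}\n)"

type info struct {
	Version  string
	IsTagged bool
}

// render executes the template against inf, as buildinfo() does
// before writing the result to buildinfo.go.
func render(inf info) string {
	var buf bytes.Buffer
	template.Must(template.New("buildinfo").Parse(tmpl)).Execute(&buf, inf)
	return buf.String()
}

func main() {
	fmt.Println(render(info{Version: "1.0.1", IsTagged: true}))
}
```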
diff --git a/cmd/peg-bootstrap/bootstrap.peg b/cmd/peg-bootstrap/bootstrap.peg
new file mode 100644
index 0000000..9eb42b4
--- /dev/null
+++ b/cmd/peg-bootstrap/bootstrap.peg
@@ -0,0 +1,46 @@
+# Core bootstrap PE Grammar for peg language.
+# Adapted from peg.peg.
+
+Grammar		<- Spacing	{ p.AddPackage("main") }
+				{ p.AddImport("github.com/pointlander/peg/tree") }
+				{ p.AddPeg("Peg");  p.AddState("*tree.Tree") }
+				Action* Definition* !.
+
+Definition	<- Identifier 			{ p.AddRule(text) }
+			LeftArrow Expression 	{ p.AddExpression() }
+Expression	<- Sequence (Slash Sequence	{ p.AddAlternate() }	)*
+Sequence	<- Prefix (Prefix		{ p.AddSequence() }	)*
+Prefix		<- '!' Suffix { p.AddPeekNot() } / Suffix
+Suffix		<- Primary 	(Question	{ p.AddQuery() }
+				/ Star		{ p.AddStar() }		)?
+Primary		<- Identifier !LeftArrow	{ p.AddName(text) }
+		 / Open Expression Close
+		 / Literal / Class / Dot	{ p.AddDot() }
+		 / Action			{ p.AddAction(text) }
+		 / Begin Expression End		{ p.AddPush() }
+
+Identifier	<- < Ident Ident* > Spacing
+Ident		<- [A-Za-z]
+Literal		<- ['] !['] Char (!['] Char	{ p.AddSequence() } )* 	['] Spacing
+Class		<- '[' Range (!']' Range	{ p.AddAlternate() } )* ']' Spacing
+Range		<- Char '-' Char { p.AddRange() } / Char
+Char		<- '\\0x' <[0-9a-f]*>		{ p.AddHexaCharacter(text) }
+		 / '\\\\'			{ p.AddCharacter("\\") }
+		 / !'\\' <.>			{ p.AddCharacter(text) }
+
+LeftArrow	<- '<-' Spacing
+Slash		<- '/' Spacing
+Question	<- '?' Spacing
+Star		<- '*' Spacing
+Open		<- '(' Spacing
+Close		<- ')' Spacing
+Dot		<- '.' Spacing
+
+Spacing		<- (Space / Comment)*
+Comment		<- '#' (!EndOfLine .)*
+Space		<- ' ' / '\0x9' / EndOfLine
+EndOfLine	<- '\0xd\0xa' / '\0xa' / '\0xd'
+
+Action		<- '{' < (![}].)* > '}' Spacing
+Begin		<- '<' Spacing
+End		<- '>' Spacing
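The bootstrap grammar's `!`, `?`, and `*` operators map directly onto the `addPeekNot`, `addQuery`, and `addStar` combinators used in the Go bootstrap code earlier in this diff. A minimal hand-rolled sketch of those three combinators (illustrative only, not the generated parser; names and the `parser` type are invented here):

```go
package main

import "fmt"

// A parser tries to match at pos and reports the new position plus success.
type parser func(s string, pos int) (int, bool)

// char matches one literal byte, like addCharacter.
func char(c byte) parser {
	return func(s string, pos int) (int, bool) {
		if pos < len(s) && s[pos] == c {
			return pos + 1, true
		}
		return pos, false
	}
}

// peekNot succeeds without consuming input iff p fails ('!', addPeekNot).
func peekNot(p parser) parser {
	return func(s string, pos int) (int, bool) {
		if _, ok := p(s, pos); ok {
			return pos, false
		}
		return pos, true
	}
}

// query matches p zero or one times ('?', addQuery); it never fails.
func query(p parser) parser {
	return func(s string, pos int) (int, bool) {
		if next, ok := p(s, pos); ok {
			return next, true
		}
		return pos, true
	}
}

// star matches p zero or more times ('*', addStar); it never fails.
func star(p parser) parser {
	return func(s string, pos int) (int, bool) {
		for {
			next, ok := p(s, pos)
			if !ok {
				return pos, true
			}
			pos = next
		}
	}
}

func main() {
	pos, ok := star(char('a'))("aaab", 0)
	fmt.Println(pos, ok) // consumes the leading run of 'a's
}
```

The `Literal` and `Comment` rules above are just compositions of these: for example `(!EndOfLine .)*` is `star` of a sequence whose first element is `peekNot`.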
diff --git a/cmd/peg-bootstrap/main.go b/cmd/peg-bootstrap/main.go
new file mode 100644
index 0000000..389c93b
--- /dev/null
+++ b/cmd/peg-bootstrap/main.go
@@ -0,0 +1,29 @@
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build bootstrap
+
+package main
+
+import (
+	"io/ioutil"
+	"log"
+	"os"
+
+	"github.com/pointlander/peg/tree"
+)
+
+func main() {
+	buffer, err := ioutil.ReadAll(os.Stdin)
+	if err != nil {
+		log.Fatal(err)
+	}
+	p := &Peg{Tree: tree.New(false, false, false), Buffer: string(buffer)}
+	p.Init(Pretty(true), Size(1<<15))
+	if err := p.Parse(); err != nil {
+		log.Fatal(err)
+	}
+	p.Execute()
+	p.Compile("boot.peg.go", os.Args, os.Stdout)
+}
diff --git a/cmd/peg-bootstrap/peg.bootstrap.peg b/cmd/peg-bootstrap/peg.bootstrap.peg
new file mode 100644
index 0000000..8b0fd0b
--- /dev/null
+++ b/cmd/peg-bootstrap/peg.bootstrap.peg
@@ -0,0 +1,106 @@
+# PE Grammar for bootstrap peg language
+#
+# Adapted from peg.peg.
+
+# Hierarchical syntax
+Grammar		<- Spacing 'package' MustSpacing Identifier      { p.AddPackage(text) }
+			   Import*
+                           'type' MustSpacing Identifier         { p.AddPeg(text) }
+                           'Peg' Spacing Action              { p.AddState(text) }
+                           Definition Definition* EndOfFile
+
+Import		<- 'import' Spacing ["] < ([a-zA-Z_/.]/'-')([a-zA-Z_/.]/'-')* > ["] Spacing { p.AddImport(text) }
+
+Definition	<- Identifier 			{ p.AddRule(text) }
+		     LeftArrow Expression 	{ p.AddExpression() }
+Expression	<- Sequence (Slash Sequence	{ p.AddAlternate() }
+			    )* (Slash           { p.AddNil(); p.AddAlternate() }
+                               )?
+                 /				{ p.AddNil() }
+Sequence	<- Prefix (Prefix		{ p.AddSequence() }
+			  )*
+Prefix		<- And Action			{ p.AddPredicate(text) }
+		 / Not Action			{ p.AddStateChange(text) }
+		 / And Suffix			{ p.AddPeekFor() }
+		 / Not Suffix			{ p.AddPeekNot() }
+		 /     Suffix
+Suffix          <- Primary (Question            { p.AddQuery() }
+                           / Star               { p.AddStar() }
+                           / Plus               { p.AddPlus() }
+                           )?
+Primary	        <- Identifier !LeftArrow        { p.AddName(text) }
+                 / Open Expression Close
+                 / Literal
+                 / Class
+                 / Dot                          { p.AddDot() }
+                 / Action                       { p.AddAction(text) }
+                 / Begin Expression End         { p.AddPush() }
+
+# Lexical syntax
+
+Identifier	<- < IdentStart IdentCont* > Spacing
+IdentStart	<- [A-Za-z_]
+IdentCont	<- IdentStart / [0-9]
+Literal		<- ['] (!['] Char)? (!['] Char                { p.AddSequence() }
+                                    )* ['] Spacing
+		 / ["] (!["] DoubleChar)? (!["] DoubleChar    { p.AddSequence() }
+                                          )* ["] Spacing
+Class		<- ( '[[' ( '^' DoubleRanges              { p.AddPeekNot(); p.AddDot(); p.AddSequence() }
+                          / DoubleRanges )?
+                     ']]'
+                   / '[' ( '^' Ranges                     { p.AddPeekNot(); p.AddDot(); p.AddSequence() }
+                         / Ranges )?
+                     ']' )
+                   Spacing
+Ranges		<- !']' Range (!']' Range  { p.AddAlternate() }
+                              )*
+DoubleRanges	<- !']]' DoubleRange (!']]' DoubleRange  { p.AddAlternate() }
+                                     )*
+Range		<- Char '-' Char              { p.AddRange() }
+                 / Char
+DoubleRange	<- Char '-' Char              { p.AddDoubleRange() }
+                 / DoubleChar
+Char            <- Escape
+                 / !'\\' <.>                  { p.AddCharacter(text) }
+DoubleChar	<- Escape
+		 / <[a-zA-Z]>                 { p.AddDoubleCharacter(text) }
+                 / !'\\' <.>                  { p.AddCharacter(text) }
+Escape          <- '\\' [aA]                  { p.AddCharacter("\a") }   # bell
+                 / '\\' [bB]                  { p.AddCharacter("\b") }   # bs
+                 / '\\' [eE]                  { p.AddCharacter("\x1B") } # esc
+                 / '\\' [fF]                  { p.AddCharacter("\f") }   # ff
+                 / '\\' [nN]                  { p.AddCharacter("\n") }   # nl
+                 / '\\' [rR]                  { p.AddCharacter("\r") }   # cr
+                 / '\\' [tT]                  { p.AddCharacter("\t") }   # ht
+                 / '\\' [vV]                  { p.AddCharacter("\v") }   # vt
+                 / '\\' [']		      { p.AddCharacter("'") }
+                 / '\\"'		      { p.AddCharacter("\"") }
+                 / '\\['                      { p.AddCharacter("[") }
+                 / '\\]'                      { p.AddCharacter("]") }
+                 / '\\-'                      { p.AddCharacter("-") }
+                 / '\\' '0'[xX] <[0-9a-fA-F][0-9a-fA-F]*>     { p.AddHexaCharacter(text) }
+                 / '\\' <[0-3][0-7][0-7]>     { p.AddOctalCharacter(text) }
+                 / '\\' <[0-7][0-7]?>         { p.AddOctalCharacter(text) }
+                 / '\\\\'                     { p.AddCharacter("\\") }
+LeftArrow	<- ('<-' / '\0x2190') Spacing
+Slash		<- '/' Spacing
+And		<- '&' Spacing
+Not		<- '!' Spacing
+Question	<- '?' Spacing
+Star		<- '*' Spacing
+Plus		<- '+' Spacing
+Open		<- '(' Spacing
+Close		<- ')' Spacing
+Dot		<- '.' Spacing
+SpaceComment	<- (Space / Comment)
+Spacing		<- SpaceComment*
+MustSpacing	<- SpaceComment Spacing
+Comment		<- '#' (!EndOfLine .)* EndOfLine
+Space		<- ' ' / '\0x9' / EndOfLine
+EndOfLine	<- '\0xd\0xa' / '\0xa' / '\0xd'
+EndOfFile	<- !.
+Action		<- '{' < ActionBody* > '}' Spacing
+ActionBody	<- ![{}]. / '{' ActionBody* '}'
+Begin		<- '<' Spacing
+End		<- '>' Spacing
+
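The `Escape` rule above captures hex digits after `'\\' '0'[xX]` and hands them to `p.AddHexaCharacter(text)`. A sketch of what such an action must do with the captured text (`hexChar` is a hypothetical stand-in; the real method lives in the generator's tree package):

```go
package main

import (
	"fmt"
	"strconv"
)

// hexChar converts the digits captured after '\0x' into the actual rune,
// mirroring what an action like p.AddHexaCharacter(text) needs to compute.
func hexChar(text string) (rune, error) {
	n, err := strconv.ParseInt(text, 16, 32)
	if err != nil {
		return 0, err
	}
	return rune(n), nil
}

func main() {
	r, _ := hexChar("1b")
	fmt.Printf("%q\n", r) // the ESC character
}
```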
diff --git a/debian/changelog b/debian/changelog
index 7cbb97e..1b614a9 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+golang-github-pointlander-peg (1.0.1-1) UNRELEASED; urgency=low
+
+  * New upstream release.
+
+ -- Debian Janitor <janitor@jelmer.uk>  Mon, 14 Mar 2022 02:59:52 -0000
+
 golang-github-pointlander-peg (1.0.0-5) unstable; urgency=medium
 
   * Team upload.
diff --git a/go.mod b/go.mod
new file mode 100644
index 0000000..7b394f3
--- /dev/null
+++ b/go.mod
@@ -0,0 +1,5 @@
+module github.com/pointlander/peg
+
+require github.com/pointlander/jetset v1.0.1-0.20190518214125-eee7eff80bd4
+
+go 1.13
diff --git a/go.sum b/go.sum
new file mode 100644
index 0000000..66b426d
--- /dev/null
+++ b/go.sum
@@ -0,0 +1,8 @@
+github.com/pointlander/compress v1.1.0 h1:5fUcQV2qEHvk0OpILH6eltwluN5VnwiYrkc1wjGUHnU=
+github.com/pointlander/compress v1.1.0/go.mod h1:q5NXNGzqj5uPnVuhGkZfmgHqNUhf15VLi6L9kW0VEc0=
+github.com/pointlander/compress v1.1.1-0.20190518213731-ff44bd196cc3 h1:hUmXhbljNFtrH5hzV9kiRoddZ5nfPTq3K0Sb2hYYiqE=
+github.com/pointlander/compress v1.1.1-0.20190518213731-ff44bd196cc3/go.mod h1:q5NXNGzqj5uPnVuhGkZfmgHqNUhf15VLi6L9kW0VEc0=
+github.com/pointlander/jetset v1.0.0 h1:bNlaNAX7cDPID9SlcogmXlDWq0KcRJSpKwHXaAM3bGQ=
+github.com/pointlander/jetset v1.0.0/go.mod h1:zY6+WHRPB10uzTajloHtybSicLW1bf6Rz0eSaU9Deng=
+github.com/pointlander/jetset v1.0.1-0.20190518214125-eee7eff80bd4 h1:RHHRCZeaNyBXdYPMjZNH8/XHDBH38TZzw8izrW7dmBE=
+github.com/pointlander/jetset v1.0.1-0.20190518214125-eee7eff80bd4/go.mod h1:RdR1j20Aj5pB6+fw6Y9Ur7lMHpegTEjY1vc19hEZL40=
diff --git a/grammars/c/Makefile b/grammars/c/Makefile
deleted file mode 100644
index d82a4e2..0000000
--- a/grammars/c/Makefile
+++ /dev/null
@@ -1,12 +0,0 @@
-# Copyright 2010 The Go Authors. All rights reserved.
-# Use of this source code is governed by a BSD-style
-# license that can be found in the LICENSE file.
-
-c: c.peg.go main.go
-	go build
-
-c.peg.go: c.peg
-	../../peg -switch -inline c.peg
-
-clean:
-	rm -f c c.peg.go
diff --git a/grammars/c/c.peg b/grammars/c/c.peg
index d45d1a4..47a6011 100644
--- a/grammars/c/c.peg
+++ b/grammars/c/c.peg
@@ -110,7 +110,7 @@ type C Peg {
 
 }
 
-TranslationUnit <- Spacing ExternalDeclaration+ EOT
+TranslationUnit <- Spacing ( ExternalDeclaration / SEMI ) * EOT
 
 ExternalDeclaration <- FunctionDefinition / Declaration
 
@@ -171,13 +171,13 @@ TypeSpecifier
 
 StructOrUnionSpecifier
    <- StructOrUnion
-      ( Identifier? LWING StructDeclaration+ RWING
+      ( Identifier? LWING StructDeclaration* RWING
       / Identifier
       )
 
 StructOrUnion <- STRUCT / UNION
 
-StructDeclaration <- SpecifierQualifierList StructDeclaratorList SEMI
+StructDeclaration <- ( SpecifierQualifierList StructDeclaratorList? )? SEMI
 
 SpecifierQualifierList
    <- ( TypeQualifier*
@@ -313,9 +313,9 @@ JumpStatement
 #-------------------------------------------------------------------------
 
 PrimaryExpression
-   <- Identifier
+   <- StringLiteral
     / Constant
-    / StringLiteral
+    / Identifier
     / LPAR Expression RPAR
 
 PostfixExpression
@@ -347,7 +347,7 @@ UnaryOperator
     / TILDA
     / BANG
 
-CastExpression <- (LPAR TypeName RPAR)* UnaryExpression
+CastExpression <- (LPAR TypeName RPAR CastExpression) / UnaryExpression
 
 MultiplicativeExpression <- CastExpression ((STAR / DIV / MOD) CastExpression)*
 
@@ -608,7 +608,7 @@ Escape
     / HexEscape
     / UniversalCharacter
 
-SimpleEscape <- '\\' ['\"?\\abfnrtv]
+SimpleEscape <- '\\' ['\"?\\%abfnrtv]
 OctalEscape  <- '\\' [0-7][0-7]?[0-7]?
 HexEscape    <- '\\x' HexDigit+
 
diff --git a/grammars/c/c_test.go b/grammars/c/c_test.go
new file mode 100644
index 0000000..34ea4cc
--- /dev/null
+++ b/grammars/c/c_test.go
@@ -0,0 +1,208 @@
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build grammars
+
+package main
+
+import (
+	"fmt"
+	"io/ioutil"
+	"log"
+	"os"
+	"strings"
+	"testing"
+)
+
+func parseCBuffer(buffer string) (*C, error) {
+	clang := &C{Buffer: buffer}
+	clang.Init()
+	err := clang.Parse()
+	return clang, err
+}
+
+func parseC_4t(t *testing.T, src string) *C {
+	c, err := parseCBuffer(src)
+	if err != nil {
+		t.Fatal(err)
+	}
+	return c
+}
+
+func noParseC_4t(t *testing.T, src string) {
+	_, err := parseCBuffer(src)
+	if err == nil {
+		t.Fatal("Parsed what should not have parsed.")
+	}
+}
+
+func TestCParsing_Expressions1(t *testing.T) {
+	case1src :=
+		`int a() {
+		(es);
+		1++;
+		1+1;
+		a+1;
+		(a)+1;
+		a->x;
+		return 0;
+}`
+	parseC_4t(t, case1src)
+}
+
+func TestCParsing_Expressions2(t *testing.T) {
+	parseC_4t(t,
+		`int a() {
+	if (a) { return (a); }
+
+	return (0);
+	return a+b;
+	return (a+b);
+	return (a)+0;
+}`)
+
+	parseC_4t(t, `int a() { return (a)+0; }`)
+}
+
+func TestCParsing_Expressions3(t *testing.T) {
+	parseC_4t(t,
+		`int a() {
+1+(a);
+(a)++;
+(es)++;
+(es)||a;
+(es)->a;
+return (a)+(b);
+return 0+(a);
+}`)
+}
+
+func TestCParsing_Expressions4(t *testing.T) {
+	parseC_4t(t, `int a(){1+(a);}`)
+}
+func TestCParsing_Expressions5(t *testing.T) {
+	parseC_4t(t, `int a(){return (int)0;}`)
+}
+func TestCParsing_Expressions6(t *testing.T) {
+	parseC_4t(t, `int a(){return (in)0;}`)
+}
+func TestCParsing_Expressions7(t *testing.T) {
+	parseC_4t(t, `int a()
+{ return (0); }`)
+}
+func TestCParsing_Cast0(t *testing.T) {
+	parseC_4t(t, `int a(){(cast)0;}`)
+}
+func TestCParsing_Cast1(t *testing.T) {
+	parseC_4t(t, `int a(){(m*)(rsp);}`)
+	parseC_4t(t, `int a(){(struct m*)(rsp);}`)
+}
+
+func TestCParsing_Empty(t *testing.T) {
+	parseC_4t(t, `/** empty is valid. */  `)
+}
+func TestCParsing_EmptyStruct(t *testing.T) {
+	parseC_4t(t, `struct empty{};`)
+	parseC_4t(t, `struct {} empty;`)
+	parseC_4t(t, `struct empty {} empty;`)
+}
+func TestCParsing_EmptyEmbeddedUnion(t *testing.T) {
+	parseC_4t(t, `struct empty{
+	union {
+		int a;
+		char b;
+	};
+};`)
+}
+func TestCParsing_ExtraSEMI(t *testing.T) {
+	parseC_4t(t, `int func(){}
+;
+struct {} empty;
+struct {} empty;;
+int foo() {};
+int foo() {};;
+`)
+
+	noParseC_4t(t, `struct empty{}`)
+}
+func TestCParsing_ExtraSEMI2(t *testing.T) {
+	parseC_4t(t, `
+struct a { int b; ; };
+`)
+
+	noParseC_4t(t, `struct empty{}`)
+}
+
+func TestCParsing_Escapes(t *testing.T) {
+	parseC_4t(t, `
+int f() {
+	printf("%s", "\a\b\f\n\r\t\v");
+	printf("\\");
+	printf("\%");
+	printf("\"");
+	printf('\"'); // <- semantically wrong but syntactically valid.
+}`)
+}
+
+func TestCParsing_Long(t *testing.T) {
+	if testing.Short() {
+		t.Skip("skipping c parsing long test")
+	}
+
+	var walk func(name string)
+	walk = func(name string) {
+		fileInfo, err := os.Stat(name)
+		if err != nil {
+			log.Fatal(err)
+		}
+
+		if fileInfo.Mode()&(os.ModeNamedPipe|os.ModeSocket|os.ModeDevice) != 0 {
+			/* will lock up if opened */
+		} else if fileInfo.IsDir() {
+			fmt.Printf("directory %v\n", name)
+
+			file, err := os.Open(name)
+			if err != nil {
+				log.Fatal(err)
+			}
+
+			files, err := file.Readdir(-1)
+			if err != nil {
+				log.Fatal(err)
+			}
+			file.Close()
+
+			for _, f := range files {
+				if !strings.HasSuffix(name, "/") {
+					name += "/"
+				}
+				walk(name + f.Name())
+			}
+		} else if strings.HasSuffix(name, ".c") {
+			fmt.Printf("parse %v\n", name)
+
+			file, err := os.Open(name)
+			if err != nil {
+				log.Fatal(err)
+			}
+
+			buffer, err := ioutil.ReadAll(file)
+			if err != nil {
+				log.Fatal(err)
+			}
+			file.Close()
+
+			clang := &C{Buffer: string(buffer)}
+			clang.Init()
+			if err := clang.Parse(); err != nil {
+				log.Fatal(err)
+			}
+		}
+	}
+	walk("c/")
+}
+
+func TestCParsing_WideString(t *testing.T) {
+	parseC_4t(t, `wchar_t *msg = L"Hello";`);
+}
diff --git a/grammars/c/main.go b/grammars/c/main.go
deleted file mode 100644
index c7e1378..0000000
--- a/grammars/c/main.go
+++ /dev/null
@@ -1,72 +0,0 @@
-// Copyright 2010 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package main
-
-import (
-	"fmt"
-	"io/ioutil"
-	"log"
-	"os"
-	"strings"
-)
-
-func main() {
-	if len(os.Args) < 2 {
-		fmt.Printf("%v FILE\n", os.Args[0])
-		os.Exit(1)
-	}
-
-	var walk func(name string)
-	walk = func(name string) {
-		fileInfo, err := os.Stat(name)
-		if err != nil {
-			log.Fatal(err)
-		}
-
-		if fileInfo.Mode() & (os.ModeNamedPipe | os.ModeSocket | os.ModeDevice) != 0 {
-			/* will lock up if opened */
-		} else if fileInfo.IsDir() {
-			fmt.Printf("directory %v\n", name)
-
-			file, err := os.Open(name)
-			if err != nil {
-				log.Fatal(err)
-			}
-
-			files, err := file.Readdir(-1)
-			if err != nil {
-				log.Fatal(err)
-			}
-			file.Close()
-
-			for _, f := range files {
-				if !strings.HasSuffix(name, "/") {
-					name += "/"
-				}
-				walk(name + f.Name())
-			}
-		} else if strings.HasSuffix(name, ".c") {
-			fmt.Printf("parse %v\n", name)
-
-			file, err := os.Open(name)
-			if err != nil {
-				log.Fatal(err)
-			}
-
-			buffer, err := ioutil.ReadAll(file)
-			if err != nil {
-				log.Fatal(err)
-			}
-			file.Close()
-
-			clang := &C{Buffer: string(buffer)}
-			clang.Init()
-			if err := clang.Parse(); err != nil {
-				log.Fatal(err)
-			}
-		}
-	}
-	walk(os.Args[1])
-}
diff --git a/grammars/calculator/Makefile b/grammars/calculator/Makefile
deleted file mode 100644
index e8dbb66..0000000
--- a/grammars/calculator/Makefile
+++ /dev/null
@@ -1,12 +0,0 @@
-# Copyright 2010 The Go Authors. All rights reserved.
-# Use of this source code is governed by a BSD-style
-# license that can be found in the LICENSE file.
-
-calculator: calculator.peg.go calculator.go main.go
-	go build
-
-calculator.peg.go: calculator.peg
-	../../peg -switch -inline calculator.peg
-
-clean:
-	rm -f calculator calculator.peg.go
diff --git a/grammars/calculator/calculator.go b/grammars/calculator/calculator.go
index 8969470..cd4f5d9 100644
--- a/grammars/calculator/calculator.go
+++ b/grammars/calculator/calculator.go
@@ -2,6 +2,8 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
+// +build grammars
+
 package main
 
 import (
diff --git a/grammars/calculator/main.go b/grammars/calculator/calculator_test.go
similarity index 51%
rename from grammars/calculator/main.go
rename to grammars/calculator/calculator_test.go
index a9a3309..935cbcf 100644
--- a/grammars/calculator/main.go
+++ b/grammars/calculator/calculator_test.go
@@ -2,28 +2,25 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
+// +build grammars
+
 package main
 
 import (
-	"fmt"
-	"log"
-	"os"
+	"math/big"
+	"testing"
 )
 
-func main() {
-	if len(os.Args) < 2 {
-		name := os.Args[0]
-		fmt.Printf("Usage: %v \"EXPRESSION\"\n", name)
-		fmt.Printf("Example: %v \"( 1 - -3 ) / 3 + 2 * ( 3 + -4 ) + 3 %% 2^2\"\n         =2\n", name)
-		os.Exit(1)
-	}
-	expression := os.Args[1]
+func TestCalculator(t *testing.T) {
+	expression := "( 1 - -3 ) / 3 + 2 * ( 3 + -4 ) + 3 % 2^2"
 	calc := &Calculator{Buffer: expression}
 	calc.Init()
 	calc.Expression.Init(expression)
 	if err := calc.Parse(); err != nil {
-		log.Fatal(err)
+		t.Fatal(err)
 	}
 	calc.Execute()
-	fmt.Printf("= %v\n", calc.Evaluate())
+	if calc.Evaluate().Cmp(big.NewInt(2)) != 0 {
+		t.Fatal("got incorrect result")
+	}
 }
diff --git a/grammars/calculator_ast/calculator.go b/grammars/calculator_ast/calculator.go
new file mode 100644
index 0000000..e3becd6
--- /dev/null
+++ b/grammars/calculator_ast/calculator.go
@@ -0,0 +1,137 @@
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build grammars
+
+package main
+
+import (
+	"math/big"
+)
+
+func (c *Calculator) Eval() *big.Int {
+	return c.Rulee(c.AST())
+}
+
+func (c *Calculator) Rulee(node *node32) *big.Int {
+	node = node.up
+	for node != nil {
+		switch node.pegRule {
+		case rulee1:
+			return c.Rulee1(node)
+		}
+		node = node.next
+	}
+	return nil
+}
+
+func (c *Calculator) Rulee1(node *node32) *big.Int {
+	node = node.up
+	var a *big.Int
+	for node != nil {
+		switch node.pegRule {
+		case rulee2:
+			a = c.Rulee2(node)
+		case ruleadd:
+			node = node.next
+			b := c.Rulee2(node)
+			a.Add(a, b)
+		case ruleminus:
+			node = node.next
+			b := c.Rulee2(node)
+			a.Sub(a, b)
+		}
+		node = node.next
+	}
+	return a
+}
+
+func (c *Calculator) Rulee2(node *node32) *big.Int {
+	node = node.up
+	var a *big.Int
+	for node != nil {
+		switch node.pegRule {
+		case rulee3:
+			a = c.Rulee3(node)
+		case rulemultiply:
+			node = node.next
+			b := c.Rulee3(node)
+			a.Mul(a, b)
+		case ruledivide:
+			node = node.next
+			b := c.Rulee3(node)
+			a.Div(a, b)
+		case rulemodulus:
+			node = node.next
+			b := c.Rulee3(node)
+			a.Mod(a, b)
+		}
+		node = node.next
+	}
+	return a
+}
+
+func (c *Calculator) Rulee3(node *node32) *big.Int {
+	node = node.up
+	var a *big.Int
+	for node != nil {
+		switch node.pegRule {
+		case rulee4:
+			a = c.Rulee4(node)
+		case ruleexponentiation:
+			node = node.next
+			b := c.Rulee4(node)
+			a.Exp(a, b, nil)
+		}
+		node = node.next
+	}
+	return a
+}
+
+func (c *Calculator) Rulee4(node *node32) *big.Int {
+	node = node.up
+	minus := false
+	for node != nil {
+		switch node.pegRule {
+		case rulevalue:
+			a := c.Rulevalue(node)
+			if minus {
+				a.Neg(a)
+			}
+			return a
+		case ruleminus:
+			minus = true
+		}
+		node = node.next
+	}
+	return nil
+}
+
+func (c *Calculator) Rulevalue(node *node32) *big.Int {
+	node = node.up
+	for node != nil {
+		switch node.pegRule {
+		case rulenumber:
+			a := big.NewInt(0)
+			a.SetString(string(c.buffer[node.begin:node.end]), 10)
+			return a
+		case rulesub:
+			return c.Rulesub(node)
+		}
+		node = node.next
+	}
+	return nil
+}
+
+func (c *Calculator) Rulesub(node *node32) *big.Int {
+	node = node.up
+	for node != nil {
+		switch node.pegRule {
+		case rulee1:
+			return c.Rulee1(node)
+		}
+		node = node.next
+	}
+	return nil
+}
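The `Rulee*` functions above all follow the same pattern: descend via `node.up` (first child), then iterate siblings via `node.next`, switching on `pegRule`. A self-contained sketch of that first-child/next-sibling walk; the `node` struct here is a toy stand-in for peg's generated `node32`, with an `int` value in place of the matched text:

```go
package main

import "fmt"

type rule int

const (
	ruleNumber rule = iota
	ruleAdd
)

// node mirrors the shape of peg's node32: up is the first child,
// next is the following sibling.
type node struct {
	pegRule  rule
	value    int // stands in for the matched text
	up, next *node
}

// sum walks an addition node's children the way Rulee1 does:
// start at the first child, then follow next pointers, dispatching
// on the rule of each child.
func sum(n *node) int {
	total := 0
	for c := n.up; c != nil; c = c.next {
		switch c.pegRule {
		case ruleNumber:
			total += c.value
		case ruleAdd:
			total += sum(c) // recurse into a nested subexpression
		}
	}
	return total
}

func main() {
	// Tree for (1 + (2 + 3)).
	inner := &node{pegRule: ruleAdd, up: &node{pegRule: ruleNumber, value: 2,
		next: &node{pegRule: ruleNumber, value: 3}}}
	root := &node{pegRule: ruleAdd, up: &node{pegRule: ruleNumber, value: 1, next: inner}}
	fmt.Println(sum(root)) // 6
}
```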
diff --git a/grammars/calculator_ast/calculator.peg b/grammars/calculator_ast/calculator.peg
new file mode 100644
index 0000000..ac5f2aa
--- /dev/null
+++ b/grammars/calculator_ast/calculator.peg
@@ -0,0 +1,34 @@
+# Copyright 2010 The Go Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style
+# license that can be found in the LICENSE file.
+
+package main
+
+type Calculator Peg {
+}
+
+e <- sp e1 !.
+e1 <- e2 ( add e2
+         / minus e2
+         )*
+e2 <- e3 ( multiply e3
+         / divide e3
+         / modulus e3
+         )*
+e3 <- e4 ( exponentiation e4
+         )*
+e4 <- minus value
+    / value
+value <- number
+       / sub
+number <- < [0-9]+ > sp
+sub <- open e1 close
+add <- '+' sp
+minus <- '-' sp
+multiply <- '*' sp
+divide <- '/' sp
+modulus <- '%' sp
+exponentiation <- '^' sp
+open <- '(' sp
+close <- ')' sp
+sp <- ( ' ' / '\t' )*
diff --git a/grammars/calculator_ast/calculator_test.go b/grammars/calculator_ast/calculator_test.go
new file mode 100644
index 0000000..8679843
--- /dev/null
+++ b/grammars/calculator_ast/calculator_test.go
@@ -0,0 +1,24 @@
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build grammars
+
+package main
+
+import (
+	"math/big"
+	"testing"
+)
+
+func TestCalculator(t *testing.T) {
+	expression := "( 1 - -3 ) / 3 + 2 * ( 3 + -4 ) + 3 % 2^2"
+	calc := &Calculator{Buffer: expression}
+	calc.Init()
+	if err := calc.Parse(); err != nil {
+		t.Fatal(err)
+	}
+	if calc.Eval().Cmp(big.NewInt(2)) != 0 {
+		t.Fatal("got incorrect result")
+	}
+}
diff --git a/grammars/fexl/Makefile b/grammars/fexl/Makefile
deleted file mode 100644
index 168cffb..0000000
--- a/grammars/fexl/Makefile
+++ /dev/null
@@ -1,12 +0,0 @@
-# Copyright 2010 The Go Authors. All rights reserved.
-# Use of this source code is governed by a BSD-style
-# license that can be found in the LICENSE file.
-
-fexl: fexl.peg.go main.go
-	go build
-
-fexl.peg.go: fexl.peg
-	../../peg -switch -inline fexl.peg
-
-clean:
-	rm -f fexl fexl.peg.go
diff --git a/grammars/fexl/main.go b/grammars/fexl/fexl_test.go
similarity index 79%
rename from grammars/fexl/main.go
rename to grammars/fexl/fexl_test.go
index d1d522e..f1ff644 100644
--- a/grammars/fexl/main.go
+++ b/grammars/fexl/fexl_test.go
@@ -2,24 +2,25 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
+// +build grammars
+
 package main
 
 import (
-	"log"
 	"io/ioutil"
+	"testing"
 )
 
-func main() {
+func TestFexl(t *testing.T) {
 	buffer, err := ioutil.ReadFile("doc/try.fxl")
 	if err != nil {
-		log.Fatal(err)
+		t.Fatal(err)
 	}
 
 	fexl := &Fexl{Buffer: string(buffer)}
 	fexl.Init()
 
 	if err := fexl.Parse(); err != nil {
-		log.Fatal(err)
+		t.Fatal(err)
 	}
-	fexl.Highlighter()
 }
diff --git a/grammars/java/Makefile b/grammars/java/Makefile
deleted file mode 100644
index 23a6353..0000000
--- a/grammars/java/Makefile
+++ /dev/null
@@ -1,12 +0,0 @@
-# Copyright 2010 The Go Authors. All rights reserved.
-# Use of this source code is governed by a BSD-style
-# license that can be found in the LICENSE file.
-
-java: java_1_7.peg.go main.go
-	go build
-
-java_1_7.peg.go: java_1_7.peg
-	../../peg -switch -inline java_1_7.peg
-
-clean:
-	rm -f java java_1_7.peg.go
diff --git a/grammars/java/java_1_7.peg b/grammars/java/java_1_7.peg
index 32abdc1..bfe66b9 100644
--- a/grammars/java/java_1_7.peg
+++ b/grammars/java/java_1_7.peg
@@ -178,7 +178,7 @@ InterfaceMethodOrFieldRest
     / InterfaceMethodDeclaratorRest
 
 InterfaceMethodDeclaratorRest
-    <- FormalParameters Dim* (THROWS ClassTypeList)? SEM
+    <- FormalParameters Dim* (THROWS ClassTypeList)? SEMI
 
 InterfaceGenericMethodDecl
     <- TypeParameters (Type / VOID) Identifier InterfaceMethodDeclaratorRest
diff --git a/grammars/java/main.go b/grammars/java/java_test.go
similarity index 70%
rename from grammars/java/main.go
rename to grammars/java/java_test.go
index 46f15fa..89e34ed 100644
--- a/grammars/java/main.go
+++ b/grammars/java/java_test.go
@@ -2,6 +2,8 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
+// +build grammars
+
 package main
 
 import (
@@ -10,12 +12,28 @@ import (
 	"log"
 	"os"
 	"strings"
+	"testing"
 )
 
-func main() {
-	if len(os.Args) < 2 {
-		fmt.Printf("%v FILE\n", os.Args[0])
-		os.Exit(1)
+var example1 = `public class HelloWorld {
+	public static void main(String[] args) {
+		System.out.println("Hello, World");
+	}
+}
+`
+
+func TestBasic(t *testing.T) {
+	java := &Java{Buffer: example1}
+	java.Init()
+
+	if err := java.Parse(); err != nil {
+		t.Fatal(err)
+	}
+}
+
+func TestJava(t *testing.T) {
+	if testing.Short() {
+		t.Skip("skipping java parsing long test")
 	}
 
 	var walk func(name string)
@@ -25,7 +43,7 @@ func main() {
 			log.Fatal(err)
 		}
 
-		if fileInfo.Mode() & (os.ModeNamedPipe | os.ModeSocket | os.ModeDevice) != 0 {
+		if fileInfo.Mode()&(os.ModeNamedPipe|os.ModeSocket|os.ModeDevice) != 0 {
 			/* will lock up if opened */
 		} else if fileInfo.IsDir() {
 			fmt.Printf("directory %v\n", name)
@@ -68,5 +86,5 @@ func main() {
 			}
 		}
 	}
-	walk(os.Args[1])
+	walk("java/")
 }
diff --git a/grammars/long_test/Makefile b/grammars/long_test/Makefile
deleted file mode 100644
index 0834fb6..0000000
--- a/grammars/long_test/Makefile
+++ /dev/null
@@ -1,12 +0,0 @@
-# Copyright 2010 The Go Authors. All rights reserved.
-# Use of this source code is governed by a BSD-style
-# license that can be found in the LICENSE file.
-
-long_test: long.peg.go main.go
-	go build
-
-long.peg.go: long.peg
-	peg -switch -inline long.peg
-
-clean:
-	rm -f long_test long.peg.go
diff --git a/grammars/long_test/main.go b/grammars/long_test/long_test.go
similarity index 70%
rename from grammars/long_test/main.go
rename to grammars/long_test/long_test.go
index 1cc9f75..60530b6 100644
--- a/grammars/long_test/main.go
+++ b/grammars/long_test/long_test.go
@@ -2,21 +2,26 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
+// +build grammars
+
 package main
 
 import (
-	"fmt"
-	"log"
+	"testing"
 )
 
-func main() {
+func TestLong(t *testing.T) {
+	length := 100000
+	if testing.Short() {
+		length = 100
+	}
+
 	expression := ""
 	long := &Long{Buffer: "\"" + expression + "\""}
 	long.Init()
-	for c := 0; c < 100000; c++ {
+	for c := 0; c < length; c++ {
 		if err := long.Parse(); err != nil {
-			fmt.Printf("%v\n", c)
-			log.Fatal(err)
+			t.Fatal(err)
 		}
 		long.Reset()
 		expression = expression + "X"
diff --git a/main.go b/main.go
index 4b76615..cfbf5e8 100644
--- a/main.go
+++ b/main.go
@@ -11,23 +11,42 @@ import (
 	"log"
 	"os"
 	"runtime"
-	"time"
+
+	"github.com/pointlander/peg/tree"
 )
 
+//go:generate -command build go run build.go
+//go:generate build buildinfo
+//go:generate build peg
+
 var (
-	inline    = flag.Bool("inline", false, "parse rule inlining")
-	_switch   = flag.Bool("switch", false, "replace if-else if-else like blocks with switch blocks")
-	syntax    = flag.Bool("syntax", false, "print out the syntax tree")
-	highlight = flag.Bool("highlight", false, "test the syntax highlighter")
-	ast       = flag.Bool("ast", false, "generate an AST")
-	test      = flag.Bool("test", false, "test the PEG parser performance")
-	print     = flag.Bool("print", false, "directly dump the syntax tree")
+	inline        = flag.Bool("inline", false, "parse rule inlining")
+	_switch       = flag.Bool("switch", false, "replace if-else if-else like blocks with switch blocks")
+	print         = flag.Bool("print", false, "directly dump the syntax tree")
+	syntax        = flag.Bool("syntax", false, "print out the syntax tree")
+	noast         = flag.Bool("noast", false, "disable AST")
+	strict        = flag.Bool("strict", false, "treat compiler warnings as errors")
+	filename      = flag.String("output", "", "specify name of output file")
+	showVersion   = flag.Bool("version", false, "print the version and exit")
+	showBuildTime = flag.Bool("time", false, "show the last time `build.go buildinfo` was ran")
 )
 
 func main() {
 	runtime.GOMAXPROCS(2)
 	flag.Parse()
 
+	if *showVersion {
+		if IS_TAGGED {
+			fmt.Println("version:", VERSION)
+		} else {
+			fmt.Printf("version: %s-%s\n", VERSION, COMMIT)
+		}
+		if *showBuildTime {
+			fmt.Println("time:", BUILDTIME)
+		}
+		return
+	}
+
 	if flag.NArg() != 1 {
 		flag.Usage()
 		log.Fatalf("FILE: the peg file to compile")
@@ -39,46 +58,33 @@ func main() {
 		log.Fatal(err)
 	}
 
-	if *test {
-		iterations, p := 1000, &Peg{Tree: New(*inline, *_switch), Buffer: string(buffer)}
-		p.Init()
-		start := time.Now()
-		for i := 0; i < iterations; i++ {
-			p.Parse()
-			p.Reset()
-		}
-		total := float64(time.Since(start).Nanoseconds()) / float64(1000)
-		fmt.Printf("time: %v us\n", total/float64(iterations))
-		return
-	}
-
-	p := &Peg{Tree: New(*inline, *_switch), Buffer: string(buffer), Pretty: true}
-	p.Init()
+	p := &Peg{Tree: tree.New(*inline, *_switch, *noast), Buffer: string(buffer)}
+	p.Init(Pretty(true), Size(1<<15))
 	if err := p.Parse(); err != nil {
 		log.Fatal(err)
 	}
 
 	p.Execute()
 
-	if *ast {
-		p.AST().Print(p.Buffer)
-	}
 	if *print {
 		p.Print()
 	}
 	if *syntax {
 		p.PrintSyntaxTree()
 	}
-	if *highlight {
-		p.Highlighter()
-	}
 
-	filename := file + ".go"
-	out, error := os.OpenFile(filename, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
-	if error != nil {
-		fmt.Printf("%v: %v\n", filename, error)
+	if *filename == "" {
+		*filename = file + ".go"
+	}
+	out, err := os.OpenFile(*filename, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
+	if err != nil {
+		fmt.Printf("%v: %v\n", *filename, err)
 		return
 	}
 	defer out.Close()
-	p.Compile(filename, out)
+
+	p.Strict = *strict
+	if err = p.Compile(*filename, os.Args, out); err != nil {
+		log.Fatal(err)
+	}
 }
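The rewritten `main.go` above replaces the old `-ast`/`-highlight`/`-test` flags with `-noast`, `-strict`, `-output`, and `-version`. A small sketch of the same `flag` handling using a `flag.FlagSet`, so the parsing can be exercised without touching `os.Args` (the helper name `parseArgs` is illustrative, not part of peg):

```go
package main

import (
	"flag"
	"fmt"
)

// parseArgs mirrors two of the flags added in main.go and returns the
// positional arguments (the .peg file) left after parsing.
func parseArgs(args []string) (inline bool, output string, rest []string, err error) {
	fs := flag.NewFlagSet("peg", flag.ContinueOnError)
	fs.BoolVar(&inline, "inline", false, "parse rule inlining")
	fs.StringVar(&output, "output", "", "specify name of output file")
	if err = fs.Parse(args); err != nil {
		return
	}
	rest = fs.Args()
	return
}

func main() {
	inline, output, rest, _ := parseArgs([]string{"-inline", "-output", "out.go", "grammar.peg"})
	fmt.Println(inline, output, rest) // true out.go [grammar.peg]
}
```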
diff --git a/peg.peg b/peg.peg
index 031765e..36b0a7e 100644
--- a/peg.peg
+++ b/peg.peg
@@ -10,10 +10,12 @@
 
 package main
 
+import "github.com/pointlander/peg/tree"
+
 # parser declaration
 
 type Peg Peg {
- *Tree
+ *tree.Tree
 }
 
 # Hierarchical syntax
@@ -23,7 +25,11 @@ Grammar		<- Spacing 'package' MustSpacing Identifier      { p.AddPackage(text) }
                            'Peg' Spacing Action              { p.AddState(text) }
                            Definition+ EndOfFile
 
-Import		<- 'import' Spacing ["] < [a-zA-Z_/.\-]+ > ["] Spacing { p.AddImport(text) }
+Import		<- 'import' Spacing (MultiImport / SingleImport) Spacing
+SingleImport	<- ImportName 
+MultiImport	<- '(' Spacing (ImportName '\n' Spacing)* Spacing ')' 
+
+ImportName	<- ["] < [0-9a-zA-Z_/.\-]+ > ["]	{ p.AddImport(text) }
 
 Definition	<- Identifier 			{ p.AddRule(text) }
 		     LeftArrow Expression 	{ p.AddExpression() } &(Identifier LeftArrow / !.)
@@ -110,7 +116,7 @@ Dot		<- '.' Spacing
 SpaceComment	<- (Space / Comment)
 Spacing		<- SpaceComment*
 MustSpacing	<- SpaceComment+
-Comment		<- '#' (!EndOfLine .)* EndOfLine
+Comment		<- ('#' / '//') (!EndOfLine .)* EndOfLine
 Space		<- ' ' / '\t' / EndOfLine
 EndOfLine	<- '\r\n' / '\n' / '\r'
 EndOfFile	<- !.
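The `peg.peg` hunk above adds a parenthesized multi-import form (`MultiImport`) and accepts `//` line comments alongside `#`. A hedged sketch of a grammar header using both; the package and parser names here are illustrative:

```peg
package main

import (
	"fmt"
	"github.com/pointlander/peg/tree"
)

// C-style comments are now accepted in addition to '#' comments.
type Example Peg {
	*tree.Tree
}

Grammar <- 'x'+ !.
```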
diff --git a/peg.peg.go b/peg.peg.go
new file mode 100644
index 0000000..ca55fd0
--- /dev/null
+++ b/peg.peg.go
@@ -0,0 +1,2862 @@
+package main
+
+// Code generated by ./peg -inline -switch peg.peg DO NOT EDIT.
+
+import (
+	"fmt"
+	"github.com/pointlander/peg/tree"
+	"io"
+	"os"
+	"sort"
+	"strconv"
+	"strings"
+)
+
+const endSymbol rune = 1114112
+
+/* The rule types inferred from the grammar are below. */
+type pegRule uint8
+
+const (
+	ruleUnknown pegRule = iota
+	ruleGrammar
+	ruleImport
+	ruleSingleImport
+	ruleMultiImport
+	ruleImportName
+	ruleDefinition
+	ruleExpression
+	ruleSequence
+	rulePrefix
+	ruleSuffix
+	rulePrimary
+	ruleIdentifier
+	ruleIdentStart
+	ruleIdentCont
+	ruleLiteral
+	ruleClass
+	ruleRanges
+	ruleDoubleRanges
+	ruleRange
+	ruleDoubleRange
+	ruleChar
+	ruleDoubleChar
+	ruleEscape
+	ruleLeftArrow
+	ruleSlash
+	ruleAnd
+	ruleNot
+	ruleQuestion
+	ruleStar
+	rulePlus
+	ruleOpen
+	ruleClose
+	ruleDot
+	ruleSpaceComment
+	ruleSpacing
+	ruleMustSpacing
+	ruleComment
+	ruleSpace
+	ruleEndOfLine
+	ruleEndOfFile
+	ruleAction
+	ruleActionBody
+	ruleBegin
+	ruleEnd
+	ruleAction0
+	ruleAction1
+	ruleAction2
+	rulePegText
+	ruleAction3
+	ruleAction4
+	ruleAction5
+	ruleAction6
+	ruleAction7
+	ruleAction8
+	ruleAction9
+	ruleAction10
+	ruleAction11
+	ruleAction12
+	ruleAction13
+	ruleAction14
+	ruleAction15
+	ruleAction16
+	ruleAction17
+	ruleAction18
+	ruleAction19
+	ruleAction20
+	ruleAction21
+	ruleAction22
+	ruleAction23
+	ruleAction24
+	ruleAction25
+	ruleAction26
+	ruleAction27
+	ruleAction28
+	ruleAction29
+	ruleAction30
+	ruleAction31
+	ruleAction32
+	ruleAction33
+	ruleAction34
+	ruleAction35
+	ruleAction36
+	ruleAction37
+	ruleAction38
+	ruleAction39
+	ruleAction40
+	ruleAction41
+	ruleAction42
+	ruleAction43
+	ruleAction44
+	ruleAction45
+	ruleAction46
+	ruleAction47
+	ruleAction48
+)
+
+var rul3s = [...]string{
+	"Unknown",
+	"Grammar",
+	"Import",
+	"SingleImport",
+	"MultiImport",
+	"ImportName",
+	"Definition",
+	"Expression",
+	"Sequence",
+	"Prefix",
+	"Suffix",
+	"Primary",
+	"Identifier",
+	"IdentStart",
+	"IdentCont",
+	"Literal",
+	"Class",
+	"Ranges",
+	"DoubleRanges",
+	"Range",
+	"DoubleRange",
+	"Char",
+	"DoubleChar",
+	"Escape",
+	"LeftArrow",
+	"Slash",
+	"And",
+	"Not",
+	"Question",
+	"Star",
+	"Plus",
+	"Open",
+	"Close",
+	"Dot",
+	"SpaceComment",
+	"Spacing",
+	"MustSpacing",
+	"Comment",
+	"Space",
+	"EndOfLine",
+	"EndOfFile",
+	"Action",
+	"ActionBody",
+	"Begin",
+	"End",
+	"Action0",
+	"Action1",
+	"Action2",
+	"PegText",
+	"Action3",
+	"Action4",
+	"Action5",
+	"Action6",
+	"Action7",
+	"Action8",
+	"Action9",
+	"Action10",
+	"Action11",
+	"Action12",
+	"Action13",
+	"Action14",
+	"Action15",
+	"Action16",
+	"Action17",
+	"Action18",
+	"Action19",
+	"Action20",
+	"Action21",
+	"Action22",
+	"Action23",
+	"Action24",
+	"Action25",
+	"Action26",
+	"Action27",
+	"Action28",
+	"Action29",
+	"Action30",
+	"Action31",
+	"Action32",
+	"Action33",
+	"Action34",
+	"Action35",
+	"Action36",
+	"Action37",
+	"Action38",
+	"Action39",
+	"Action40",
+	"Action41",
+	"Action42",
+	"Action43",
+	"Action44",
+	"Action45",
+	"Action46",
+	"Action47",
+	"Action48",
+}
+
+type token32 struct {
+	pegRule
+	begin, end uint32
+}
+
+func (t *token32) String() string {
+	return fmt.Sprintf("\x1B[34m%v\x1B[m %v %v", rul3s[t.pegRule], t.begin, t.end)
+}
+
+type node32 struct {
+	token32
+	up, next *node32
+}
+
+func (node *node32) print(w io.Writer, pretty bool, buffer string) {
+	var print func(node *node32, depth int)
+	print = func(node *node32, depth int) {
+		for node != nil {
+			for c := 0; c < depth; c++ {
+				fmt.Fprintf(w, " ")
+			}
+			rule := rul3s[node.pegRule]
+			quote := strconv.Quote(string(([]rune(buffer)[node.begin:node.end])))
+			if !pretty {
+				fmt.Fprintf(w, "%v %v\n", rule, quote)
+			} else {
+				fmt.Fprintf(w, "\x1B[36m%v\x1B[m %v\n", rule, quote)
+			}
+			if node.up != nil {
+				print(node.up, depth+1)
+			}
+			node = node.next
+		}
+	}
+	print(node, 0)
+}
+
+func (node *node32) Print(w io.Writer, buffer string) {
+	node.print(w, false, buffer)
+}
+
+func (node *node32) PrettyPrint(w io.Writer, buffer string) {
+	node.print(w, true, buffer)
+}
+
+type tokens32 struct {
+	tree []token32
+}
+
+func (t *tokens32) Trim(length uint32) {
+	t.tree = t.tree[:length]
+}
+
+func (t *tokens32) Print() {
+	for _, token := range t.tree {
+		fmt.Println(token.String())
+	}
+}
+
+func (t *tokens32) AST() *node32 {
+	type element struct {
+		node *node32
+		down *element
+	}
+	tokens := t.Tokens()
+	var stack *element
+	for _, token := range tokens {
+		if token.begin == token.end {
+			continue
+		}
+		node := &node32{token32: token}
+		for stack != nil && stack.node.begin >= token.begin && stack.node.end <= token.end {
+			stack.node.next = node.up
+			node.up = stack.node
+			stack = stack.down
+		}
+		stack = &element{node: node, down: stack}
+	}
+	if stack != nil {
+		return stack.node
+	}
+	return nil
+}
+
+func (t *tokens32) PrintSyntaxTree(buffer string) {
+	t.AST().Print(os.Stdout, buffer)
+}
+
+func (t *tokens32) WriteSyntaxTree(w io.Writer, buffer string) {
+	t.AST().Print(w, buffer)
+}
+
+func (t *tokens32) PrettyPrintSyntaxTree(buffer string) {
+	t.AST().PrettyPrint(os.Stdout, buffer)
+}
+
+func (t *tokens32) Add(rule pegRule, begin, end, index uint32) {
+	tree, i := t.tree, int(index)
+	if i >= len(tree) {
+		t.tree = append(tree, token32{pegRule: rule, begin: begin, end: end})
+		return
+	}
+	tree[i] = token32{pegRule: rule, begin: begin, end: end}
+}
+
+func (t *tokens32) Tokens() []token32 {
+	return t.tree
+}
+
+type Peg struct {
+	*tree.Tree
+
+	Buffer string
+	buffer []rune
+	rules  [95]func() bool
+	parse  func(rule ...int) error
+	reset  func()
+	Pretty bool
+	tokens32
+}
+
+func (p *Peg) Parse(rule ...int) error {
+	return p.parse(rule...)
+}
+
+func (p *Peg) Reset() {
+	p.reset()
+}
+
+type textPosition struct {
+	line, symbol int
+}
+
+type textPositionMap map[int]textPosition
+
+func translatePositions(buffer []rune, positions []int) textPositionMap {
+	length, translations, j, line, symbol := len(positions), make(textPositionMap, len(positions)), 0, 1, 0
+	sort.Ints(positions)
+
+search:
+	for i, c := range buffer {
+		if c == '\n' {
+			line, symbol = line+1, 0
+		} else {
+			symbol++
+		}
+		if i == positions[j] {
+			translations[positions[j]] = textPosition{line, symbol}
+			for j++; j < length; j++ {
+				if i != positions[j] {
+					continue search
+				}
+			}
+			break search
+		}
+	}
+
+	return translations
+}
+
+type parseError struct {
+	p   *Peg
+	max token32
+}
+
+func (e *parseError) Error() string {
+	tokens, err := []token32{e.max}, "\n"
+	positions, p := make([]int, 2*len(tokens)), 0
+	for _, token := range tokens {
+		positions[p], p = int(token.begin), p+1
+		positions[p], p = int(token.end), p+1
+	}
+	translations := translatePositions(e.p.buffer, positions)
+	format := "parse error near %v (line %v symbol %v - line %v symbol %v):\n%v\n"
+	if e.p.Pretty {
+		format = "parse error near \x1B[34m%v\x1B[m (line %v symbol %v - line %v symbol %v):\n%v\n"
+	}
+	for _, token := range tokens {
+		begin, end := int(token.begin), int(token.end)
+		err += fmt.Sprintf(format,
+			rul3s[token.pegRule],
+			translations[begin].line, translations[begin].symbol,
+			translations[end].line, translations[end].symbol,
+			strconv.Quote(string(e.p.buffer[begin:end])))
+	}
+
+	return err
+}
+
+func (p *Peg) PrintSyntaxTree() {
+	if p.Pretty {
+		p.tokens32.PrettyPrintSyntaxTree(p.Buffer)
+	} else {
+		p.tokens32.PrintSyntaxTree(p.Buffer)
+	}
+}
+
+func (p *Peg) WriteSyntaxTree(w io.Writer) {
+	p.tokens32.WriteSyntaxTree(w, p.Buffer)
+}
+
+func (p *Peg) SprintSyntaxTree() string {
+	var bldr strings.Builder
+	p.WriteSyntaxTree(&bldr)
+	return bldr.String()
+}
+
+func (p *Peg) Execute() {
+	buffer, _buffer, text, begin, end := p.Buffer, p.buffer, "", 0, 0
+	for _, token := range p.Tokens() {
+		switch token.pegRule {
+
+		case rulePegText:
+			begin, end = int(token.begin), int(token.end)
+			text = string(_buffer[begin:end])
+
+		case ruleAction0:
+			p.AddPackage(text)
+		case ruleAction1:
+			p.AddPeg(text)
+		case ruleAction2:
+			p.AddState(text)
+		case ruleAction3:
+			p.AddImport(text)
+		case ruleAction4:
+			p.AddRule(text)
+		case ruleAction5:
+			p.AddExpression()
+		case ruleAction6:
+			p.AddAlternate()
+		case ruleAction7:
+			p.AddNil()
+			p.AddAlternate()
+		case ruleAction8:
+			p.AddNil()
+		case ruleAction9:
+			p.AddSequence()
+		case ruleAction10:
+			p.AddPredicate(text)
+		case ruleAction11:
+			p.AddStateChange(text)
+		case ruleAction12:
+			p.AddPeekFor()
+		case ruleAction13:
+			p.AddPeekNot()
+		case ruleAction14:
+			p.AddQuery()
+		case ruleAction15:
+			p.AddStar()
+		case ruleAction16:
+			p.AddPlus()
+		case ruleAction17:
+			p.AddName(text)
+		case ruleAction18:
+			p.AddDot()
+		case ruleAction19:
+			p.AddAction(text)
+		case ruleAction20:
+			p.AddPush()
+		case ruleAction21:
+			p.AddSequence()
+		case ruleAction22:
+			p.AddSequence()
+		case ruleAction23:
+			p.AddPeekNot()
+			p.AddDot()
+			p.AddSequence()
+		case ruleAction24:
+			p.AddPeekNot()
+			p.AddDot()
+			p.AddSequence()
+		case ruleAction25:
+			p.AddAlternate()
+		case ruleAction26:
+			p.AddAlternate()
+		case ruleAction27:
+			p.AddRange()
+		case ruleAction28:
+			p.AddDoubleRange()
+		case ruleAction29:
+			p.AddCharacter(text)
+		case ruleAction30:
+			p.AddDoubleCharacter(text)
+		case ruleAction31:
+			p.AddCharacter(text)
+		case ruleAction32:
+			p.AddCharacter("\a")
+		case ruleAction33:
+			p.AddCharacter("\b")
+		case ruleAction34:
+			p.AddCharacter("\x1B")
+		case ruleAction35:
+			p.AddCharacter("\f")
+		case ruleAction36:
+			p.AddCharacter("\n")
+		case ruleAction37:
+			p.AddCharacter("\r")
+		case ruleAction38:
+			p.AddCharacter("\t")
+		case ruleAction39:
+			p.AddCharacter("\v")
+		case ruleAction40:
+			p.AddCharacter("'")
+		case ruleAction41:
+			p.AddCharacter("\"")
+		case ruleAction42:
+			p.AddCharacter("[")
+		case ruleAction43:
+			p.AddCharacter("]")
+		case ruleAction44:
+			p.AddCharacter("-")
+		case ruleAction45:
+			p.AddHexaCharacter(text)
+		case ruleAction46:
+			p.AddOctalCharacter(text)
+		case ruleAction47:
+			p.AddOctalCharacter(text)
+		case ruleAction48:
+			p.AddCharacter("\\")
+
+		}
+	}
+	_, _, _, _, _ = buffer, _buffer, text, begin, end
+}
+
+func Pretty(pretty bool) func(*Peg) error {
+	return func(p *Peg) error {
+		p.Pretty = pretty
+		return nil
+	}
+}
+
+func Size(size int) func(*Peg) error {
+	return func(p *Peg) error {
+		p.tokens32 = tokens32{tree: make([]token32, 0, size)}
+		return nil
+	}
+}
+func (p *Peg) Init(options ...func(*Peg) error) error {
+	var (
+		max                  token32
+		position, tokenIndex uint32
+		buffer               []rune
+	)
+	for _, option := range options {
+		err := option(p)
+		if err != nil {
+			return err
+		}
+	}
+	p.reset = func() {
+		max = token32{}
+		position, tokenIndex = 0, 0
+
+		p.buffer = []rune(p.Buffer)
+		if len(p.buffer) == 0 || p.buffer[len(p.buffer)-1] != endSymbol {
+			p.buffer = append(p.buffer, endSymbol)
+		}
+		buffer = p.buffer
+	}
+	p.reset()
+
+	_rules := p.rules
+	tree := p.tokens32
+	p.parse = func(rule ...int) error {
+		r := 1
+		if len(rule) > 0 {
+			r = rule[0]
+		}
+		matches := p.rules[r]()
+		p.tokens32 = tree
+		if matches {
+			p.Trim(tokenIndex)
+			return nil
+		}
+		return &parseError{p, max}
+	}
+
+	add := func(rule pegRule, begin uint32) {
+		tree.Add(rule, begin, position, tokenIndex)
+		tokenIndex++
+		if begin != position && position > max.end {
+			max = token32{rule, begin, position}
+		}
+	}
+
+	matchDot := func() bool {
+		if buffer[position] != endSymbol {
+			position++
+			return true
+		}
+		return false
+	}
+
+	/*matchChar := func(c byte) bool {
+		if buffer[position] == c {
+			position++
+			return true
+		}
+		return false
+	}*/
+
+	/*matchRange := func(lower byte, upper byte) bool {
+		if c := buffer[position]; c >= lower && c <= upper {
+			position++
+			return true
+		}
+		return false
+	}*/
+
+	_rules = [...]func() bool{
+		nil,
+		/* 0 Grammar <- <(Spacing ('p' 'a' 'c' 'k' 'a' 'g' 'e') MustSpacing Identifier Action0 Import* ('t' 'y' 'p' 'e') MustSpacing Identifier Action1 ('P' 'e' 'g') Spacing Action Action2 Definition+ EndOfFile)> */
+		func() bool {
+			position0, tokenIndex0 := position, tokenIndex
+			{
+				position1 := position
+				if !_rules[ruleSpacing]() {
+					goto l0
+				}
+				if buffer[position] != rune('p') {
+					goto l0
+				}
+				position++
+				if buffer[position] != rune('a') {
+					goto l0
+				}
+				position++
+				if buffer[position] != rune('c') {
+					goto l0
+				}
+				position++
+				if buffer[position] != rune('k') {
+					goto l0
+				}
+				position++
+				if buffer[position] != rune('a') {
+					goto l0
+				}
+				position++
+				if buffer[position] != rune('g') {
+					goto l0
+				}
+				position++
+				if buffer[position] != rune('e') {
+					goto l0
+				}
+				position++
+				if !_rules[ruleMustSpacing]() {
+					goto l0
+				}
+				if !_rules[ruleIdentifier]() {
+					goto l0
+				}
+				{
+					add(ruleAction0, position)
+				}
+			l3:
+				{
+					position4, tokenIndex4 := position, tokenIndex
+					{
+						position5 := position
+						if buffer[position] != rune('i') {
+							goto l4
+						}
+						position++
+						if buffer[position] != rune('m') {
+							goto l4
+						}
+						position++
+						if buffer[position] != rune('p') {
+							goto l4
+						}
+						position++
+						if buffer[position] != rune('o') {
+							goto l4
+						}
+						position++
+						if buffer[position] != rune('r') {
+							goto l4
+						}
+						position++
+						if buffer[position] != rune('t') {
+							goto l4
+						}
+						position++
+						if !_rules[ruleSpacing]() {
+							goto l4
+						}
+						{
+							position6, tokenIndex6 := position, tokenIndex
+							{
+								position8 := position
+								if buffer[position] != rune('(') {
+									goto l7
+								}
+								position++
+								if !_rules[ruleSpacing]() {
+									goto l7
+								}
+							l9:
+								{
+									position10, tokenIndex10 := position, tokenIndex
+									if !_rules[ruleImportName]() {
+										goto l10
+									}
+									if buffer[position] != rune('\n') {
+										goto l10
+									}
+									position++
+									if !_rules[ruleSpacing]() {
+										goto l10
+									}
+									goto l9
+								l10:
+									position, tokenIndex = position10, tokenIndex10
+								}
+								if !_rules[ruleSpacing]() {
+									goto l7
+								}
+								if buffer[position] != rune(')') {
+									goto l7
+								}
+								position++
+								add(ruleMultiImport, position8)
+							}
+							goto l6
+						l7:
+							position, tokenIndex = position6, tokenIndex6
+							{
+								position11 := position
+								if !_rules[ruleImportName]() {
+									goto l4
+								}
+								add(ruleSingleImport, position11)
+							}
+						}
+					l6:
+						if !_rules[ruleSpacing]() {
+							goto l4
+						}
+						add(ruleImport, position5)
+					}
+					goto l3
+				l4:
+					position, tokenIndex = position4, tokenIndex4
+				}
+				if buffer[position] != rune('t') {
+					goto l0
+				}
+				position++
+				if buffer[position] != rune('y') {
+					goto l0
+				}
+				position++
+				if buffer[position] != rune('p') {
+					goto l0
+				}
+				position++
+				if buffer[position] != rune('e') {
+					goto l0
+				}
+				position++
+				if !_rules[ruleMustSpacing]() {
+					goto l0
+				}
+				if !_rules[ruleIdentifier]() {
+					goto l0
+				}
+				{
+					add(ruleAction1, position)
+				}
+				if buffer[position] != rune('P') {
+					goto l0
+				}
+				position++
+				if buffer[position] != rune('e') {
+					goto l0
+				}
+				position++
+				if buffer[position] != rune('g') {
+					goto l0
+				}
+				position++
+				if !_rules[ruleSpacing]() {
+					goto l0
+				}
+				if !_rules[ruleAction]() {
+					goto l0
+				}
+				{
+					add(ruleAction2, position)
+				}
+				{
+					position16 := position
+					if !_rules[ruleIdentifier]() {
+						goto l0
+					}
+					{
+						add(ruleAction4, position)
+					}
+					if !_rules[ruleLeftArrow]() {
+						goto l0
+					}
+					if !_rules[ruleExpression]() {
+						goto l0
+					}
+					{
+						add(ruleAction5, position)
+					}
+					{
+						position19, tokenIndex19 := position, tokenIndex
+						{
+							position20, tokenIndex20 := position, tokenIndex
+							if !_rules[ruleIdentifier]() {
+								goto l21
+							}
+							if !_rules[ruleLeftArrow]() {
+								goto l21
+							}
+							goto l20
+						l21:
+							position, tokenIndex = position20, tokenIndex20
+							{
+								position22, tokenIndex22 := position, tokenIndex
+								if !matchDot() {
+									goto l22
+								}
+								goto l0
+							l22:
+								position, tokenIndex = position22, tokenIndex22
+							}
+						}
+					l20:
+						position, tokenIndex = position19, tokenIndex19
+					}
+					add(ruleDefinition, position16)
+				}
+			l14:
+				{
+					position15, tokenIndex15 := position, tokenIndex
+					{
+						position23 := position
+						if !_rules[ruleIdentifier]() {
+							goto l15
+						}
+						{
+							add(ruleAction4, position)
+						}
+						if !_rules[ruleLeftArrow]() {
+							goto l15
+						}
+						if !_rules[ruleExpression]() {
+							goto l15
+						}
+						{
+							add(ruleAction5, position)
+						}
+						{
+							position26, tokenIndex26 := position, tokenIndex
+							{
+								position27, tokenIndex27 := position, tokenIndex
+								if !_rules[ruleIdentifier]() {
+									goto l28
+								}
+								if !_rules[ruleLeftArrow]() {
+									goto l28
+								}
+								goto l27
+							l28:
+								position, tokenIndex = position27, tokenIndex27
+								{
+									position29, tokenIndex29 := position, tokenIndex
+									if !matchDot() {
+										goto l29
+									}
+									goto l15
+								l29:
+									position, tokenIndex = position29, tokenIndex29
+								}
+							}
+						l27:
+							position, tokenIndex = position26, tokenIndex26
+						}
+						add(ruleDefinition, position23)
+					}
+					goto l14
+				l15:
+					position, tokenIndex = position15, tokenIndex15
+				}
+				{
+					position30 := position
+					{
+						position31, tokenIndex31 := position, tokenIndex
+						if !matchDot() {
+							goto l31
+						}
+						goto l0
+					l31:
+						position, tokenIndex = position31, tokenIndex31
+					}
+					add(ruleEndOfFile, position30)
+				}
+				add(ruleGrammar, position1)
+			}
+			return true
+		l0:
+			position, tokenIndex = position0, tokenIndex0
+			return false
+		},
+		/* 1 Import <- <('i' 'm' 'p' 'o' 'r' 't' Spacing (MultiImport / SingleImport) Spacing)> */
+		nil,
+		/* 2 SingleImport <- <ImportName> */
+		nil,
+		/* 3 MultiImport <- <('(' Spacing (ImportName '\n' Spacing)* Spacing ')')> */
+		nil,
+		/* 4 ImportName <- <('"' <((&('-') '-') | (&('.') '.') | (&('/') '/') | (&('_') '_') | (&('A' | 'B' | 'C' | 'D' | 'E' | 'F' | 'G' | 'H' | 'I' | 'J' | 'K' | 'L' | 'M' | 'N' | 'O' | 'P' | 'Q' | 'R' | 'S' | 'T' | 'U' | 'V' | 'W' | 'X' | 'Y' | 'Z') [A-Z]) | (&('0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9') [0-9]) | (&('a' | 'b' | 'c' | 'd' | 'e' | 'f' | 'g' | 'h' | 'i' | 'j' | 'k' | 'l' | 'm' | 'n' | 'o' | 'p' | 'q' | 'r' | 's' | 't' | 'u' | 'v' | 'w' | 'x' | 'y' | 'z') [a-z]))+> '"' Action3)> */
+		func() bool {
+			position35, tokenIndex35 := position, tokenIndex
+			{
+				position36 := position
+				if buffer[position] != rune('"') {
+					goto l35
+				}
+				position++
+				{
+					position37 := position
+					{
+						switch buffer[position] {
+						case '-':
+							if buffer[position] != rune('-') {
+								goto l35
+							}
+							position++
+						case '.':
+							if buffer[position] != rune('.') {
+								goto l35
+							}
+							position++
+						case '/':
+							if buffer[position] != rune('/') {
+								goto l35
+							}
+							position++
+						case '_':
+							if buffer[position] != rune('_') {
+								goto l35
+							}
+							position++
+						case 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z':
+							if c := buffer[position]; c < rune('A') || c > rune('Z') {
+								goto l35
+							}
+							position++
+						case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
+							if c := buffer[position]; c < rune('0') || c > rune('9') {
+								goto l35
+							}
+							position++
+						default:
+							if c := buffer[position]; c < rune('a') || c > rune('z') {
+								goto l35
+							}
+							position++
+						}
+					}
+
+				l38:
+					{
+						position39, tokenIndex39 := position, tokenIndex
+						{
+							switch buffer[position] {
+							case '-':
+								if buffer[position] != rune('-') {
+									goto l39
+								}
+								position++
+							case '.':
+								if buffer[position] != rune('.') {
+									goto l39
+								}
+								position++
+							case '/':
+								if buffer[position] != rune('/') {
+									goto l39
+								}
+								position++
+							case '_':
+								if buffer[position] != rune('_') {
+									goto l39
+								}
+								position++
+							case 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z':
+								if c := buffer[position]; c < rune('A') || c > rune('Z') {
+									goto l39
+								}
+								position++
+							case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
+								if c := buffer[position]; c < rune('0') || c > rune('9') {
+									goto l39
+								}
+								position++
+							default:
+								if c := buffer[position]; c < rune('a') || c > rune('z') {
+									goto l39
+								}
+								position++
+							}
+						}
+
+						goto l38
+					l39:
+						position, tokenIndex = position39, tokenIndex39
+					}
+					add(rulePegText, position37)
+				}
+				if buffer[position] != rune('"') {
+					goto l35
+				}
+				position++
+				{
+					add(ruleAction3, position)
+				}
+				add(ruleImportName, position36)
+			}
+			return true
+		l35:
+			position, tokenIndex = position35, tokenIndex35
+			return false
+		},
+		/* 5 Definition <- <(Identifier Action4 LeftArrow Expression Action5 &((Identifier LeftArrow) / !.))> */
+		nil,
+		/* 6 Expression <- <((Sequence (Slash Sequence Action6)* (Slash Action7)?) / Action8)> */
+		func() bool {
+			{
+				position45 := position
+				{
+					position46, tokenIndex46 := position, tokenIndex
+					if !_rules[ruleSequence]() {
+						goto l47
+					}
+				l48:
+					{
+						position49, tokenIndex49 := position, tokenIndex
+						if !_rules[ruleSlash]() {
+							goto l49
+						}
+						if !_rules[ruleSequence]() {
+							goto l49
+						}
+						{
+							add(ruleAction6, position)
+						}
+						goto l48
+					l49:
+						position, tokenIndex = position49, tokenIndex49
+					}
+					{
+						position51, tokenIndex51 := position, tokenIndex
+						if !_rules[ruleSlash]() {
+							goto l51
+						}
+						{
+							add(ruleAction7, position)
+						}
+						goto l52
+					l51:
+						position, tokenIndex = position51, tokenIndex51
+					}
+				l52:
+					goto l46
+				l47:
+					position, tokenIndex = position46, tokenIndex46
+					{
+						add(ruleAction8, position)
+					}
+				}
+			l46:
+				add(ruleExpression, position45)
+			}
+			return true
+		},
+		/* 7 Sequence <- <(Prefix (Prefix Action9)*)> */
+		func() bool {
+			position55, tokenIndex55 := position, tokenIndex
+			{
+				position56 := position
+				if !_rules[rulePrefix]() {
+					goto l55
+				}
+			l57:
+				{
+					position58, tokenIndex58 := position, tokenIndex
+					if !_rules[rulePrefix]() {
+						goto l58
+					}
+					{
+						add(ruleAction9, position)
+					}
+					goto l57
+				l58:
+					position, tokenIndex = position58, tokenIndex58
+				}
+				add(ruleSequence, position56)
+			}
+			return true
+		l55:
+			position, tokenIndex = position55, tokenIndex55
+			return false
+		},
+		/* 8 Prefix <- <((And Action Action10) / (Not Action Action11) / ((&('!') (Not Suffix Action13)) | (&('&') (And Suffix Action12)) | (&('"' | '\'' | '(' | '.' | '<' | 'A' | 'B' | 'C' | 'D' | 'E' | 'F' | 'G' | 'H' | 'I' | 'J' | 'K' | 'L' | 'M' | 'N' | 'O' | 'P' | 'Q' | 'R' | 'S' | 'T' | 'U' | 'V' | 'W' | 'X' | 'Y' | 'Z' | '[' | '_' | 'a' | 'b' | 'c' | 'd' | 'e' | 'f' | 'g' | 'h' | 'i' | 'j' | 'k' | 'l' | 'm' | 'n' | 'o' | 'p' | 'q' | 'r' | 's' | 't' | 'u' | 'v' | 'w' | 'x' | 'y' | 'z' | '{') Suffix)))> */
+		func() bool {
+			position60, tokenIndex60 := position, tokenIndex
+			{
+				position61 := position
+				{
+					position62, tokenIndex62 := position, tokenIndex
+					if !_rules[ruleAnd]() {
+						goto l63
+					}
+					if !_rules[ruleAction]() {
+						goto l63
+					}
+					{
+						add(ruleAction10, position)
+					}
+					goto l62
+				l63:
+					position, tokenIndex = position62, tokenIndex62
+					if !_rules[ruleNot]() {
+						goto l65
+					}
+					if !_rules[ruleAction]() {
+						goto l65
+					}
+					{
+						add(ruleAction11, position)
+					}
+					goto l62
+				l65:
+					position, tokenIndex = position62, tokenIndex62
+					{
+						switch buffer[position] {
+						case '!':
+							if !_rules[ruleNot]() {
+								goto l60
+							}
+							if !_rules[ruleSuffix]() {
+								goto l60
+							}
+							{
+								add(ruleAction13, position)
+							}
+						case '&':
+							if !_rules[ruleAnd]() {
+								goto l60
+							}
+							if !_rules[ruleSuffix]() {
+								goto l60
+							}
+							{
+								add(ruleAction12, position)
+							}
+						default:
+							if !_rules[ruleSuffix]() {
+								goto l60
+							}
+						}
+					}
+
+				}
+			l62:
+				add(rulePrefix, position61)
+			}
+			return true
+		l60:
+			position, tokenIndex = position60, tokenIndex60
+			return false
+		},
+		/* 9 Suffix <- <(Primary ((&('+') (Plus Action16)) | (&('*') (Star Action15)) | (&('?') (Question Action14)))?)> */
+		func() bool {
+			position70, tokenIndex70 := position, tokenIndex
+			{
+				position71 := position
+				{
+					position72 := position
+					{
+						switch buffer[position] {
+						case '<':
+							{
+								position74 := position
+								if buffer[position] != rune('<') {
+									goto l70
+								}
+								position++
+								if !_rules[ruleSpacing]() {
+									goto l70
+								}
+								add(ruleBegin, position74)
+							}
+							if !_rules[ruleExpression]() {
+								goto l70
+							}
+							{
+								position75 := position
+								if buffer[position] != rune('>') {
+									goto l70
+								}
+								position++
+								if !_rules[ruleSpacing]() {
+									goto l70
+								}
+								add(ruleEnd, position75)
+							}
+							{
+								add(ruleAction20, position)
+							}
+						case '{':
+							if !_rules[ruleAction]() {
+								goto l70
+							}
+							{
+								add(ruleAction19, position)
+							}
+						case '.':
+							{
+								position78 := position
+								if buffer[position] != rune('.') {
+									goto l70
+								}
+								position++
+								if !_rules[ruleSpacing]() {
+									goto l70
+								}
+								add(ruleDot, position78)
+							}
+							{
+								add(ruleAction18, position)
+							}
+						case '[':
+							{
+								position80 := position
+								{
+									position81, tokenIndex81 := position, tokenIndex
+									if buffer[position] != rune('[') {
+										goto l82
+									}
+									position++
+									if buffer[position] != rune('[') {
+										goto l82
+									}
+									position++
+									{
+										position83, tokenIndex83 := position, tokenIndex
+										{
+											position85, tokenIndex85 := position, tokenIndex
+											if buffer[position] != rune('^') {
+												goto l86
+											}
+											position++
+											if !_rules[ruleDoubleRanges]() {
+												goto l86
+											}
+											{
+												add(ruleAction23, position)
+											}
+											goto l85
+										l86:
+											position, tokenIndex = position85, tokenIndex85
+											if !_rules[ruleDoubleRanges]() {
+												goto l83
+											}
+										}
+									l85:
+										goto l84
+									l83:
+										position, tokenIndex = position83, tokenIndex83
+									}
+								l84:
+									if buffer[position] != rune(']') {
+										goto l82
+									}
+									position++
+									if buffer[position] != rune(']') {
+										goto l82
+									}
+									position++
+									goto l81
+								l82:
+									position, tokenIndex = position81, tokenIndex81
+									if buffer[position] != rune('[') {
+										goto l70
+									}
+									position++
+									{
+										position88, tokenIndex88 := position, tokenIndex
+										{
+											position90, tokenIndex90 := position, tokenIndex
+											if buffer[position] != rune('^') {
+												goto l91
+											}
+											position++
+											if !_rules[ruleRanges]() {
+												goto l91
+											}
+											{
+												add(ruleAction24, position)
+											}
+											goto l90
+										l91:
+											position, tokenIndex = position90, tokenIndex90
+											if !_rules[ruleRanges]() {
+												goto l88
+											}
+										}
+									l90:
+										goto l89
+									l88:
+										position, tokenIndex = position88, tokenIndex88
+									}
+								l89:
+									if buffer[position] != rune(']') {
+										goto l70
+									}
+									position++
+								}
+							l81:
+								if !_rules[ruleSpacing]() {
+									goto l70
+								}
+								add(ruleClass, position80)
+							}
+						case '"', '\'':
+							{
+								position93 := position
+								{
+									position94, tokenIndex94 := position, tokenIndex
+									if buffer[position] != rune('\'') {
+										goto l95
+									}
+									position++
+									{
+										position96, tokenIndex96 := position, tokenIndex
+										{
+											position98, tokenIndex98 := position, tokenIndex
+											if buffer[position] != rune('\'') {
+												goto l98
+											}
+											position++
+											goto l96
+										l98:
+											position, tokenIndex = position98, tokenIndex98
+										}
+										if !_rules[ruleChar]() {
+											goto l96
+										}
+										goto l97
+									l96:
+										position, tokenIndex = position96, tokenIndex96
+									}
+								l97:
+								l99:
+									{
+										position100, tokenIndex100 := position, tokenIndex
+										{
+											position101, tokenIndex101 := position, tokenIndex
+											if buffer[position] != rune('\'') {
+												goto l101
+											}
+											position++
+											goto l100
+										l101:
+											position, tokenIndex = position101, tokenIndex101
+										}
+										if !_rules[ruleChar]() {
+											goto l100
+										}
+										{
+											add(ruleAction21, position)
+										}
+										goto l99
+									l100:
+										position, tokenIndex = position100, tokenIndex100
+									}
+									if buffer[position] != rune('\'') {
+										goto l95
+									}
+									position++
+									if !_rules[ruleSpacing]() {
+										goto l95
+									}
+									goto l94
+								l95:
+									position, tokenIndex = position94, tokenIndex94
+									if buffer[position] != rune('"') {
+										goto l70
+									}
+									position++
+									{
+										position103, tokenIndex103 := position, tokenIndex
+										{
+											position105, tokenIndex105 := position, tokenIndex
+											if buffer[position] != rune('"') {
+												goto l105
+											}
+											position++
+											goto l103
+										l105:
+											position, tokenIndex = position105, tokenIndex105
+										}
+										if !_rules[ruleDoubleChar]() {
+											goto l103
+										}
+										goto l104
+									l103:
+										position, tokenIndex = position103, tokenIndex103
+									}
+								l104:
+								l106:
+									{
+										position107, tokenIndex107 := position, tokenIndex
+										{
+											position108, tokenIndex108 := position, tokenIndex
+											if buffer[position] != rune('"') {
+												goto l108
+											}
+											position++
+											goto l107
+										l108:
+											position, tokenIndex = position108, tokenIndex108
+										}
+										if !_rules[ruleDoubleChar]() {
+											goto l107
+										}
+										{
+											add(ruleAction22, position)
+										}
+										goto l106
+									l107:
+										position, tokenIndex = position107, tokenIndex107
+									}
+									if buffer[position] != rune('"') {
+										goto l70
+									}
+									position++
+									if !_rules[ruleSpacing]() {
+										goto l70
+									}
+								}
+							l94:
+								add(ruleLiteral, position93)
+							}
+						case '(':
+							{
+								position110 := position
+								if buffer[position] != rune('(') {
+									goto l70
+								}
+								position++
+								if !_rules[ruleSpacing]() {
+									goto l70
+								}
+								add(ruleOpen, position110)
+							}
+							if !_rules[ruleExpression]() {
+								goto l70
+							}
+							{
+								position111 := position
+								if buffer[position] != rune(')') {
+									goto l70
+								}
+								position++
+								if !_rules[ruleSpacing]() {
+									goto l70
+								}
+								add(ruleClose, position111)
+							}
+						default:
+							if !_rules[ruleIdentifier]() {
+								goto l70
+							}
+							{
+								position112, tokenIndex112 := position, tokenIndex
+								if !_rules[ruleLeftArrow]() {
+									goto l112
+								}
+								goto l70
+							l112:
+								position, tokenIndex = position112, tokenIndex112
+							}
+							{
+								add(ruleAction17, position)
+							}
+						}
+					}
+
+					add(rulePrimary, position72)
+				}
+				{
+					position114, tokenIndex114 := position, tokenIndex
+					{
+						switch buffer[position] {
+						case '+':
+							{
+								position117 := position
+								if buffer[position] != rune('+') {
+									goto l114
+								}
+								position++
+								if !_rules[ruleSpacing]() {
+									goto l114
+								}
+								add(rulePlus, position117)
+							}
+							{
+								add(ruleAction16, position)
+							}
+						case '*':
+							{
+								position119 := position
+								if buffer[position] != rune('*') {
+									goto l114
+								}
+								position++
+								if !_rules[ruleSpacing]() {
+									goto l114
+								}
+								add(ruleStar, position119)
+							}
+							{
+								add(ruleAction15, position)
+							}
+						default:
+							{
+								position121 := position
+								if buffer[position] != rune('?') {
+									goto l114
+								}
+								position++
+								if !_rules[ruleSpacing]() {
+									goto l114
+								}
+								add(ruleQuestion, position121)
+							}
+							{
+								add(ruleAction14, position)
+							}
+						}
+					}
+
+					goto l115
+				l114:
+					position, tokenIndex = position114, tokenIndex114
+				}
+			l115:
+				add(ruleSuffix, position71)
+			}
+			return true
+		l70:
+			position, tokenIndex = position70, tokenIndex70
+			return false
+		},
+		/* 10 Primary <- <((&('<') (Begin Expression End Action20)) | (&('{') (Action Action19)) | (&('.') (Dot Action18)) | (&('[') Class) | (&('"' | '\'') Literal) | (&('(') (Open Expression Close)) | (&('A' | 'B' | 'C' | 'D' | 'E' | 'F' | 'G' | 'H' | 'I' | 'J' | 'K' | 'L' | 'M' | 'N' | 'O' | 'P' | 'Q' | 'R' | 'S' | 'T' | 'U' | 'V' | 'W' | 'X' | 'Y' | 'Z' | '_' | 'a' | 'b' | 'c' | 'd' | 'e' | 'f' | 'g' | 'h' | 'i' | 'j' | 'k' | 'l' | 'm' | 'n' | 'o' | 'p' | 'q' | 'r' | 's' | 't' | 'u' | 'v' | 'w' | 'x' | 'y' | 'z') (Identifier !LeftArrow Action17)))> */
+		nil,
+		/* 11 Identifier <- <(<(IdentStart IdentCont*)> Spacing)> */
+		func() bool {
+			position124, tokenIndex124 := position, tokenIndex
+			{
+				position125 := position
+				{
+					position126 := position
+					if !_rules[ruleIdentStart]() {
+						goto l124
+					}
+				l127:
+					{
+						position128, tokenIndex128 := position, tokenIndex
+						{
+							position129 := position
+							{
+								position130, tokenIndex130 := position, tokenIndex
+								if !_rules[ruleIdentStart]() {
+									goto l131
+								}
+								goto l130
+							l131:
+								position, tokenIndex = position130, tokenIndex130
+								if c := buffer[position]; c < rune('0') || c > rune('9') {
+									goto l128
+								}
+								position++
+							}
+						l130:
+							add(ruleIdentCont, position129)
+						}
+						goto l127
+					l128:
+						position, tokenIndex = position128, tokenIndex128
+					}
+					add(rulePegText, position126)
+				}
+				if !_rules[ruleSpacing]() {
+					goto l124
+				}
+				add(ruleIdentifier, position125)
+			}
+			return true
+		l124:
+			position, tokenIndex = position124, tokenIndex124
+			return false
+		},
+		/* 12 IdentStart <- <((&('_') '_') | (&('A' | 'B' | 'C' | 'D' | 'E' | 'F' | 'G' | 'H' | 'I' | 'J' | 'K' | 'L' | 'M' | 'N' | 'O' | 'P' | 'Q' | 'R' | 'S' | 'T' | 'U' | 'V' | 'W' | 'X' | 'Y' | 'Z') [A-Z]) | (&('a' | 'b' | 'c' | 'd' | 'e' | 'f' | 'g' | 'h' | 'i' | 'j' | 'k' | 'l' | 'm' | 'n' | 'o' | 'p' | 'q' | 'r' | 's' | 't' | 'u' | 'v' | 'w' | 'x' | 'y' | 'z') [a-z]))> */
+		func() bool {
+			position132, tokenIndex132 := position, tokenIndex
+			{
+				position133 := position
+				{
+					switch buffer[position] {
+					case '_':
+						if buffer[position] != rune('_') {
+							goto l132
+						}
+						position++
+					case 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z':
+						if c := buffer[position]; c < rune('A') || c > rune('Z') {
+							goto l132
+						}
+						position++
+					default:
+						if c := buffer[position]; c < rune('a') || c > rune('z') {
+							goto l132
+						}
+						position++
+					}
+				}
+
+				add(ruleIdentStart, position133)
+			}
+			return true
+		l132:
+			position, tokenIndex = position132, tokenIndex132
+			return false
+		},
+		/* 13 IdentCont <- <(IdentStart / [0-9])> */
+		nil,
+		/* 14 Literal <- <(('\'' (!'\'' Char)? (!'\'' Char Action21)* '\'' Spacing) / ('"' (!'"' DoubleChar)? (!'"' DoubleChar Action22)* '"' Spacing))> */
+		nil,
+		/* 15 Class <- <((('[' '[' (('^' DoubleRanges Action23) / DoubleRanges)? (']' ']')) / ('[' (('^' Ranges Action24) / Ranges)? ']')) Spacing)> */
+		nil,
+		/* 16 Ranges <- <(!']' Range (!']' Range Action25)*)> */
+		func() bool {
+			position138, tokenIndex138 := position, tokenIndex
+			{
+				position139 := position
+				{
+					position140, tokenIndex140 := position, tokenIndex
+					if buffer[position] != rune(']') {
+						goto l140
+					}
+					position++
+					goto l138
+				l140:
+					position, tokenIndex = position140, tokenIndex140
+				}
+				if !_rules[ruleRange]() {
+					goto l138
+				}
+			l141:
+				{
+					position142, tokenIndex142 := position, tokenIndex
+					{
+						position143, tokenIndex143 := position, tokenIndex
+						if buffer[position] != rune(']') {
+							goto l143
+						}
+						position++
+						goto l142
+					l143:
+						position, tokenIndex = position143, tokenIndex143
+					}
+					if !_rules[ruleRange]() {
+						goto l142
+					}
+					{
+						add(ruleAction25, position)
+					}
+					goto l141
+				l142:
+					position, tokenIndex = position142, tokenIndex142
+				}
+				add(ruleRanges, position139)
+			}
+			return true
+		l138:
+			position, tokenIndex = position138, tokenIndex138
+			return false
+		},
+		/* 17 DoubleRanges <- <(!(']' ']') DoubleRange (!(']' ']') DoubleRange Action26)*)> */
+		func() bool {
+			position145, tokenIndex145 := position, tokenIndex
+			{
+				position146 := position
+				{
+					position147, tokenIndex147 := position, tokenIndex
+					if buffer[position] != rune(']') {
+						goto l147
+					}
+					position++
+					if buffer[position] != rune(']') {
+						goto l147
+					}
+					position++
+					goto l145
+				l147:
+					position, tokenIndex = position147, tokenIndex147
+				}
+				if !_rules[ruleDoubleRange]() {
+					goto l145
+				}
+			l148:
+				{
+					position149, tokenIndex149 := position, tokenIndex
+					{
+						position150, tokenIndex150 := position, tokenIndex
+						if buffer[position] != rune(']') {
+							goto l150
+						}
+						position++
+						if buffer[position] != rune(']') {
+							goto l150
+						}
+						position++
+						goto l149
+					l150:
+						position, tokenIndex = position150, tokenIndex150
+					}
+					if !_rules[ruleDoubleRange]() {
+						goto l149
+					}
+					{
+						add(ruleAction26, position)
+					}
+					goto l148
+				l149:
+					position, tokenIndex = position149, tokenIndex149
+				}
+				add(ruleDoubleRanges, position146)
+			}
+			return true
+		l145:
+			position, tokenIndex = position145, tokenIndex145
+			return false
+		},
+		/* 18 Range <- <((Char '-' Char Action27) / Char)> */
+		func() bool {
+			position152, tokenIndex152 := position, tokenIndex
+			{
+				position153 := position
+				{
+					position154, tokenIndex154 := position, tokenIndex
+					if !_rules[ruleChar]() {
+						goto l155
+					}
+					if buffer[position] != rune('-') {
+						goto l155
+					}
+					position++
+					if !_rules[ruleChar]() {
+						goto l155
+					}
+					{
+						add(ruleAction27, position)
+					}
+					goto l154
+				l155:
+					position, tokenIndex = position154, tokenIndex154
+					if !_rules[ruleChar]() {
+						goto l152
+					}
+				}
+			l154:
+				add(ruleRange, position153)
+			}
+			return true
+		l152:
+			position, tokenIndex = position152, tokenIndex152
+			return false
+		},
+		/* 19 DoubleRange <- <((Char '-' Char Action28) / DoubleChar)> */
+		func() bool {
+			position157, tokenIndex157 := position, tokenIndex
+			{
+				position158 := position
+				{
+					position159, tokenIndex159 := position, tokenIndex
+					if !_rules[ruleChar]() {
+						goto l160
+					}
+					if buffer[position] != rune('-') {
+						goto l160
+					}
+					position++
+					if !_rules[ruleChar]() {
+						goto l160
+					}
+					{
+						add(ruleAction28, position)
+					}
+					goto l159
+				l160:
+					position, tokenIndex = position159, tokenIndex159
+					if !_rules[ruleDoubleChar]() {
+						goto l157
+					}
+				}
+			l159:
+				add(ruleDoubleRange, position158)
+			}
+			return true
+		l157:
+			position, tokenIndex = position157, tokenIndex157
+			return false
+		},
+		/* 20 Char <- <(Escape / (!'\\' <.> Action29))> */
+		func() bool {
+			position162, tokenIndex162 := position, tokenIndex
+			{
+				position163 := position
+				{
+					position164, tokenIndex164 := position, tokenIndex
+					if !_rules[ruleEscape]() {
+						goto l165
+					}
+					goto l164
+				l165:
+					position, tokenIndex = position164, tokenIndex164
+					{
+						position166, tokenIndex166 := position, tokenIndex
+						if buffer[position] != rune('\\') {
+							goto l166
+						}
+						position++
+						goto l162
+					l166:
+						position, tokenIndex = position166, tokenIndex166
+					}
+					{
+						position167 := position
+						if !matchDot() {
+							goto l162
+						}
+						add(rulePegText, position167)
+					}
+					{
+						add(ruleAction29, position)
+					}
+				}
+			l164:
+				add(ruleChar, position163)
+			}
+			return true
+		l162:
+			position, tokenIndex = position162, tokenIndex162
+			return false
+		},
+		/* 21 DoubleChar <- <(Escape / (<([a-z] / [A-Z])> Action30) / (!'\\' <.> Action31))> */
+		func() bool {
+			position169, tokenIndex169 := position, tokenIndex
+			{
+				position170 := position
+				{
+					position171, tokenIndex171 := position, tokenIndex
+					if !_rules[ruleEscape]() {
+						goto l172
+					}
+					goto l171
+				l172:
+					position, tokenIndex = position171, tokenIndex171
+					{
+						position174 := position
+						{
+							position175, tokenIndex175 := position, tokenIndex
+							if c := buffer[position]; c < rune('a') || c > rune('z') {
+								goto l176
+							}
+							position++
+							goto l175
+						l176:
+							position, tokenIndex = position175, tokenIndex175
+							if c := buffer[position]; c < rune('A') || c > rune('Z') {
+								goto l173
+							}
+							position++
+						}
+					l175:
+						add(rulePegText, position174)
+					}
+					{
+						add(ruleAction30, position)
+					}
+					goto l171
+				l173:
+					position, tokenIndex = position171, tokenIndex171
+					{
+						position178, tokenIndex178 := position, tokenIndex
+						if buffer[position] != rune('\\') {
+							goto l178
+						}
+						position++
+						goto l169
+					l178:
+						position, tokenIndex = position178, tokenIndex178
+					}
+					{
+						position179 := position
+						if !matchDot() {
+							goto l169
+						}
+						add(rulePegText, position179)
+					}
+					{
+						add(ruleAction31, position)
+					}
+				}
+			l171:
+				add(ruleDoubleChar, position170)
+			}
+			return true
+		l169:
+			position, tokenIndex = position169, tokenIndex169
+			return false
+		},
+		/* 22 Escape <- <(('\\' ('a' / 'A') Action32) / ('\\' ('b' / 'B') Action33) / ('\\' ('e' / 'E') Action34) / ('\\' ('f' / 'F') Action35) / ('\\' ('n' / 'N') Action36) / ('\\' ('r' / 'R') Action37) / ('\\' ('t' / 'T') Action38) / ('\\' ('v' / 'V') Action39) / ('\\' '\'' Action40) / ('\\' '"' Action41) / ('\\' '[' Action42) / ('\\' ']' Action43) / ('\\' '-' Action44) / ('\\' ('0' ('x' / 'X')) <((&('A' | 'B' | 'C' | 'D' | 'E' | 'F') [A-F]) | (&('a' | 'b' | 'c' | 'd' | 'e' | 'f') [a-f]) | (&('0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9') [0-9]))+> Action45) / ('\\' <([0-3] [0-7] [0-7])> Action46) / ('\\' <([0-7] [0-7]?)> Action47) / ('\\' '\\' Action48))> */
+		func() bool {
+			position181, tokenIndex181 := position, tokenIndex
+			{
+				position182 := position
+				{
+					position183, tokenIndex183 := position, tokenIndex
+					if buffer[position] != rune('\\') {
+						goto l184
+					}
+					position++
+					{
+						position185, tokenIndex185 := position, tokenIndex
+						if buffer[position] != rune('a') {
+							goto l186
+						}
+						position++
+						goto l185
+					l186:
+						position, tokenIndex = position185, tokenIndex185
+						if buffer[position] != rune('A') {
+							goto l184
+						}
+						position++
+					}
+				l185:
+					{
+						add(ruleAction32, position)
+					}
+					goto l183
+				l184:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l188
+					}
+					position++
+					{
+						position189, tokenIndex189 := position, tokenIndex
+						if buffer[position] != rune('b') {
+							goto l190
+						}
+						position++
+						goto l189
+					l190:
+						position, tokenIndex = position189, tokenIndex189
+						if buffer[position] != rune('B') {
+							goto l188
+						}
+						position++
+					}
+				l189:
+					{
+						add(ruleAction33, position)
+					}
+					goto l183
+				l188:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l192
+					}
+					position++
+					{
+						position193, tokenIndex193 := position, tokenIndex
+						if buffer[position] != rune('e') {
+							goto l194
+						}
+						position++
+						goto l193
+					l194:
+						position, tokenIndex = position193, tokenIndex193
+						if buffer[position] != rune('E') {
+							goto l192
+						}
+						position++
+					}
+				l193:
+					{
+						add(ruleAction34, position)
+					}
+					goto l183
+				l192:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l196
+					}
+					position++
+					{
+						position197, tokenIndex197 := position, tokenIndex
+						if buffer[position] != rune('f') {
+							goto l198
+						}
+						position++
+						goto l197
+					l198:
+						position, tokenIndex = position197, tokenIndex197
+						if buffer[position] != rune('F') {
+							goto l196
+						}
+						position++
+					}
+				l197:
+					{
+						add(ruleAction35, position)
+					}
+					goto l183
+				l196:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l200
+					}
+					position++
+					{
+						position201, tokenIndex201 := position, tokenIndex
+						if buffer[position] != rune('n') {
+							goto l202
+						}
+						position++
+						goto l201
+					l202:
+						position, tokenIndex = position201, tokenIndex201
+						if buffer[position] != rune('N') {
+							goto l200
+						}
+						position++
+					}
+				l201:
+					{
+						add(ruleAction36, position)
+					}
+					goto l183
+				l200:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l204
+					}
+					position++
+					{
+						position205, tokenIndex205 := position, tokenIndex
+						if buffer[position] != rune('r') {
+							goto l206
+						}
+						position++
+						goto l205
+					l206:
+						position, tokenIndex = position205, tokenIndex205
+						if buffer[position] != rune('R') {
+							goto l204
+						}
+						position++
+					}
+				l205:
+					{
+						add(ruleAction37, position)
+					}
+					goto l183
+				l204:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l208
+					}
+					position++
+					{
+						position209, tokenIndex209 := position, tokenIndex
+						if buffer[position] != rune('t') {
+							goto l210
+						}
+						position++
+						goto l209
+					l210:
+						position, tokenIndex = position209, tokenIndex209
+						if buffer[position] != rune('T') {
+							goto l208
+						}
+						position++
+					}
+				l209:
+					{
+						add(ruleAction38, position)
+					}
+					goto l183
+				l208:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l212
+					}
+					position++
+					{
+						position213, tokenIndex213 := position, tokenIndex
+						if buffer[position] != rune('v') {
+							goto l214
+						}
+						position++
+						goto l213
+					l214:
+						position, tokenIndex = position213, tokenIndex213
+						if buffer[position] != rune('V') {
+							goto l212
+						}
+						position++
+					}
+				l213:
+					{
+						add(ruleAction39, position)
+					}
+					goto l183
+				l212:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l216
+					}
+					position++
+					if buffer[position] != rune('\'') {
+						goto l216
+					}
+					position++
+					{
+						add(ruleAction40, position)
+					}
+					goto l183
+				l216:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l218
+					}
+					position++
+					if buffer[position] != rune('"') {
+						goto l218
+					}
+					position++
+					{
+						add(ruleAction41, position)
+					}
+					goto l183
+				l218:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l220
+					}
+					position++
+					if buffer[position] != rune('[') {
+						goto l220
+					}
+					position++
+					{
+						add(ruleAction42, position)
+					}
+					goto l183
+				l220:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l222
+					}
+					position++
+					if buffer[position] != rune(']') {
+						goto l222
+					}
+					position++
+					{
+						add(ruleAction43, position)
+					}
+					goto l183
+				l222:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l224
+					}
+					position++
+					if buffer[position] != rune('-') {
+						goto l224
+					}
+					position++
+					{
+						add(ruleAction44, position)
+					}
+					goto l183
+				l224:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l226
+					}
+					position++
+					if buffer[position] != rune('0') {
+						goto l226
+					}
+					position++
+					{
+						position227, tokenIndex227 := position, tokenIndex
+						if buffer[position] != rune('x') {
+							goto l228
+						}
+						position++
+						goto l227
+					l228:
+						position, tokenIndex = position227, tokenIndex227
+						if buffer[position] != rune('X') {
+							goto l226
+						}
+						position++
+					}
+				l227:
+					{
+						position229 := position
+						{
+							switch buffer[position] {
+							case 'A', 'B', 'C', 'D', 'E', 'F':
+								if c := buffer[position]; c < rune('A') || c > rune('F') {
+									goto l226
+								}
+								position++
+							case 'a', 'b', 'c', 'd', 'e', 'f':
+								if c := buffer[position]; c < rune('a') || c > rune('f') {
+									goto l226
+								}
+								position++
+							default:
+								if c := buffer[position]; c < rune('0') || c > rune('9') {
+									goto l226
+								}
+								position++
+							}
+						}
+
+					l230:
+						{
+							position231, tokenIndex231 := position, tokenIndex
+							{
+								switch buffer[position] {
+								case 'A', 'B', 'C', 'D', 'E', 'F':
+									if c := buffer[position]; c < rune('A') || c > rune('F') {
+										goto l231
+									}
+									position++
+								case 'a', 'b', 'c', 'd', 'e', 'f':
+									if c := buffer[position]; c < rune('a') || c > rune('f') {
+										goto l231
+									}
+									position++
+								default:
+									if c := buffer[position]; c < rune('0') || c > rune('9') {
+										goto l231
+									}
+									position++
+								}
+							}
+
+							goto l230
+						l231:
+							position, tokenIndex = position231, tokenIndex231
+						}
+						add(rulePegText, position229)
+					}
+					{
+						add(ruleAction45, position)
+					}
+					goto l183
+				l226:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l235
+					}
+					position++
+					{
+						position236 := position
+						if c := buffer[position]; c < rune('0') || c > rune('3') {
+							goto l235
+						}
+						position++
+						if c := buffer[position]; c < rune('0') || c > rune('7') {
+							goto l235
+						}
+						position++
+						if c := buffer[position]; c < rune('0') || c > rune('7') {
+							goto l235
+						}
+						position++
+						add(rulePegText, position236)
+					}
+					{
+						add(ruleAction46, position)
+					}
+					goto l183
+				l235:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l238
+					}
+					position++
+					{
+						position239 := position
+						if c := buffer[position]; c < rune('0') || c > rune('7') {
+							goto l238
+						}
+						position++
+						{
+							position240, tokenIndex240 := position, tokenIndex
+							if c := buffer[position]; c < rune('0') || c > rune('7') {
+								goto l240
+							}
+							position++
+							goto l241
+						l240:
+							position, tokenIndex = position240, tokenIndex240
+						}
+					l241:
+						add(rulePegText, position239)
+					}
+					{
+						add(ruleAction47, position)
+					}
+					goto l183
+				l238:
+					position, tokenIndex = position183, tokenIndex183
+					if buffer[position] != rune('\\') {
+						goto l181
+					}
+					position++
+					if buffer[position] != rune('\\') {
+						goto l181
+					}
+					position++
+					{
+						add(ruleAction48, position)
+					}
+				}
+			l183:
+				add(ruleEscape, position182)
+			}
+			return true
+		l181:
+			position, tokenIndex = position181, tokenIndex181
+			return false
+		},
+		/* 23 LeftArrow <- <((('<' '-') / '←') Spacing)> */
+		func() bool {
+			position244, tokenIndex244 := position, tokenIndex
+			{
+				position245 := position
+				{
+					position246, tokenIndex246 := position, tokenIndex
+					if buffer[position] != rune('<') {
+						goto l247
+					}
+					position++
+					if buffer[position] != rune('-') {
+						goto l247
+					}
+					position++
+					goto l246
+				l247:
+					position, tokenIndex = position246, tokenIndex246
+					if buffer[position] != rune('←') {
+						goto l244
+					}
+					position++
+				}
+			l246:
+				if !_rules[ruleSpacing]() {
+					goto l244
+				}
+				add(ruleLeftArrow, position245)
+			}
+			return true
+		l244:
+			position, tokenIndex = position244, tokenIndex244
+			return false
+		},
+		/* 24 Slash <- <('/' Spacing)> */
+		func() bool {
+			position248, tokenIndex248 := position, tokenIndex
+			{
+				position249 := position
+				if buffer[position] != rune('/') {
+					goto l248
+				}
+				position++
+				if !_rules[ruleSpacing]() {
+					goto l248
+				}
+				add(ruleSlash, position249)
+			}
+			return true
+		l248:
+			position, tokenIndex = position248, tokenIndex248
+			return false
+		},
+		/* 25 And <- <('&' Spacing)> */
+		func() bool {
+			position250, tokenIndex250 := position, tokenIndex
+			{
+				position251 := position
+				if buffer[position] != rune('&') {
+					goto l250
+				}
+				position++
+				if !_rules[ruleSpacing]() {
+					goto l250
+				}
+				add(ruleAnd, position251)
+			}
+			return true
+		l250:
+			position, tokenIndex = position250, tokenIndex250
+			return false
+		},
+		/* 26 Not <- <('!' Spacing)> */
+		func() bool {
+			position252, tokenIndex252 := position, tokenIndex
+			{
+				position253 := position
+				if buffer[position] != rune('!') {
+					goto l252
+				}
+				position++
+				if !_rules[ruleSpacing]() {
+					goto l252
+				}
+				add(ruleNot, position253)
+			}
+			return true
+		l252:
+			position, tokenIndex = position252, tokenIndex252
+			return false
+		},
+		/* 27 Question <- <('?' Spacing)> */
+		nil,
+		/* 28 Star <- <('*' Spacing)> */
+		nil,
+		/* 29 Plus <- <('+' Spacing)> */
+		nil,
+		/* 30 Open <- <('(' Spacing)> */
+		nil,
+		/* 31 Close <- <(')' Spacing)> */
+		nil,
+		/* 32 Dot <- <('.' Spacing)> */
+		nil,
+		/* 33 SpaceComment <- <(Space / Comment)> */
+		func() bool {
+			position260, tokenIndex260 := position, tokenIndex
+			{
+				position261 := position
+				{
+					position262, tokenIndex262 := position, tokenIndex
+					{
+						position264 := position
+						{
+							switch buffer[position] {
+							case '\t':
+								if buffer[position] != rune('\t') {
+									goto l263
+								}
+								position++
+							case ' ':
+								if buffer[position] != rune(' ') {
+									goto l263
+								}
+								position++
+							default:
+								if !_rules[ruleEndOfLine]() {
+									goto l263
+								}
+							}
+						}
+
+						add(ruleSpace, position264)
+					}
+					goto l262
+				l263:
+					position, tokenIndex = position262, tokenIndex262
+					{
+						position266 := position
+						{
+							position267, tokenIndex267 := position, tokenIndex
+							if buffer[position] != rune('#') {
+								goto l268
+							}
+							position++
+							goto l267
+						l268:
+							position, tokenIndex = position267, tokenIndex267
+							if buffer[position] != rune('/') {
+								goto l260
+							}
+							position++
+							if buffer[position] != rune('/') {
+								goto l260
+							}
+							position++
+						}
+					l267:
+					l269:
+						{
+							position270, tokenIndex270 := position, tokenIndex
+							{
+								position271, tokenIndex271 := position, tokenIndex
+								if !_rules[ruleEndOfLine]() {
+									goto l271
+								}
+								goto l270
+							l271:
+								position, tokenIndex = position271, tokenIndex271
+							}
+							if !matchDot() {
+								goto l270
+							}
+							goto l269
+						l270:
+							position, tokenIndex = position270, tokenIndex270
+						}
+						if !_rules[ruleEndOfLine]() {
+							goto l260
+						}
+						add(ruleComment, position266)
+					}
+				}
+			l262:
+				add(ruleSpaceComment, position261)
+			}
+			return true
+		l260:
+			position, tokenIndex = position260, tokenIndex260
+			return false
+		},
+		/* 34 Spacing <- <SpaceComment*> */
+		func() bool {
+			{
+				position273 := position
+			l274:
+				{
+					position275, tokenIndex275 := position, tokenIndex
+					if !_rules[ruleSpaceComment]() {
+						goto l275
+					}
+					goto l274
+				l275:
+					position, tokenIndex = position275, tokenIndex275
+				}
+				add(ruleSpacing, position273)
+			}
+			return true
+		},
+		/* 35 MustSpacing <- <SpaceComment+> */
+		func() bool {
+			position276, tokenIndex276 := position, tokenIndex
+			{
+				position277 := position
+				if !_rules[ruleSpaceComment]() {
+					goto l276
+				}
+			l278:
+				{
+					position279, tokenIndex279 := position, tokenIndex
+					if !_rules[ruleSpaceComment]() {
+						goto l279
+					}
+					goto l278
+				l279:
+					position, tokenIndex = position279, tokenIndex279
+				}
+				add(ruleMustSpacing, position277)
+			}
+			return true
+		l276:
+			position, tokenIndex = position276, tokenIndex276
+			return false
+		},
+		/* 36 Comment <- <(('#' / ('/' '/')) (!EndOfLine .)* EndOfLine)> */
+		nil,
+		/* 37 Space <- <((&('\t') '\t') | (&(' ') ' ') | (&('\n' | '\r') EndOfLine))> */
+		nil,
+		/* 38 EndOfLine <- <(('\r' '\n') / '\n' / '\r')> */
+		func() bool {
+			position282, tokenIndex282 := position, tokenIndex
+			{
+				position283 := position
+				{
+					position284, tokenIndex284 := position, tokenIndex
+					if buffer[position] != rune('\r') {
+						goto l285
+					}
+					position++
+					if buffer[position] != rune('\n') {
+						goto l285
+					}
+					position++
+					goto l284
+				l285:
+					position, tokenIndex = position284, tokenIndex284
+					if buffer[position] != rune('\n') {
+						goto l286
+					}
+					position++
+					goto l284
+				l286:
+					position, tokenIndex = position284, tokenIndex284
+					if buffer[position] != rune('\r') {
+						goto l282
+					}
+					position++
+				}
+			l284:
+				add(ruleEndOfLine, position283)
+			}
+			return true
+		l282:
+			position, tokenIndex = position282, tokenIndex282
+			return false
+		},
+		/* 39 EndOfFile <- <!.> */
+		nil,
+		/* 40 Action <- <('{' <ActionBody*> '}' Spacing)> */
+		func() bool {
+			position288, tokenIndex288 := position, tokenIndex
+			{
+				position289 := position
+				if buffer[position] != rune('{') {
+					goto l288
+				}
+				position++
+				{
+					position290 := position
+				l291:
+					{
+						position292, tokenIndex292 := position, tokenIndex
+						if !_rules[ruleActionBody]() {
+							goto l292
+						}
+						goto l291
+					l292:
+						position, tokenIndex = position292, tokenIndex292
+					}
+					add(rulePegText, position290)
+				}
+				if buffer[position] != rune('}') {
+					goto l288
+				}
+				position++
+				if !_rules[ruleSpacing]() {
+					goto l288
+				}
+				add(ruleAction, position289)
+			}
+			return true
+		l288:
+			position, tokenIndex = position288, tokenIndex288
+			return false
+		},
+		/* 41 ActionBody <- <((!('{' / '}') .) / ('{' ActionBody* '}'))> */
+		func() bool {
+			position293, tokenIndex293 := position, tokenIndex
+			{
+				position294 := position
+				{
+					position295, tokenIndex295 := position, tokenIndex
+					{
+						position297, tokenIndex297 := position, tokenIndex
+						{
+							position298, tokenIndex298 := position, tokenIndex
+							if buffer[position] != rune('{') {
+								goto l299
+							}
+							position++
+							goto l298
+						l299:
+							position, tokenIndex = position298, tokenIndex298
+							if buffer[position] != rune('}') {
+								goto l297
+							}
+							position++
+						}
+					l298:
+						goto l296
+					l297:
+						position, tokenIndex = position297, tokenIndex297
+					}
+					if !matchDot() {
+						goto l296
+					}
+					goto l295
+				l296:
+					position, tokenIndex = position295, tokenIndex295
+					if buffer[position] != rune('{') {
+						goto l293
+					}
+					position++
+				l300:
+					{
+						position301, tokenIndex301 := position, tokenIndex
+						if !_rules[ruleActionBody]() {
+							goto l301
+						}
+						goto l300
+					l301:
+						position, tokenIndex = position301, tokenIndex301
+					}
+					if buffer[position] != rune('}') {
+						goto l293
+					}
+					position++
+				}
+			l295:
+				add(ruleActionBody, position294)
+			}
+			return true
+		l293:
+			position, tokenIndex = position293, tokenIndex293
+			return false
+		},
+		/* 42 Begin <- <('<' Spacing)> */
+		nil,
+		/* 43 End <- <('>' Spacing)> */
+		nil,
+		/* 45 Action0 <- <{ p.AddPackage(text) }> */
+		nil,
+		/* 46 Action1 <- <{ p.AddPeg(text) }> */
+		nil,
+		/* 47 Action2 <- <{ p.AddState(text) }> */
+		nil,
+		nil,
+		/* 49 Action3 <- <{ p.AddImport(text) }> */
+		nil,
+		/* 50 Action4 <- <{ p.AddRule(text) }> */
+		nil,
+		/* 51 Action5 <- <{ p.AddExpression() }> */
+		nil,
+		/* 52 Action6 <- <{ p.AddAlternate() }> */
+		nil,
+		/* 53 Action7 <- <{ p.AddNil(); p.AddAlternate() }> */
+		nil,
+		/* 54 Action8 <- <{ p.AddNil() }> */
+		nil,
+		/* 55 Action9 <- <{ p.AddSequence() }> */
+		nil,
+		/* 56 Action10 <- <{ p.AddPredicate(text) }> */
+		nil,
+		/* 57 Action11 <- <{ p.AddStateChange(text) }> */
+		nil,
+		/* 58 Action12 <- <{ p.AddPeekFor() }> */
+		nil,
+		/* 59 Action13 <- <{ p.AddPeekNot() }> */
+		nil,
+		/* 60 Action14 <- <{ p.AddQuery() }> */
+		nil,
+		/* 61 Action15 <- <{ p.AddStar() }> */
+		nil,
+		/* 62 Action16 <- <{ p.AddPlus() }> */
+		nil,
+		/* 63 Action17 <- <{ p.AddName(text) }> */
+		nil,
+		/* 64 Action18 <- <{ p.AddDot() }> */
+		nil,
+		/* 65 Action19 <- <{ p.AddAction(text) }> */
+		nil,
+		/* 66 Action20 <- <{ p.AddPush() }> */
+		nil,
+		/* 67 Action21 <- <{ p.AddSequence() }> */
+		nil,
+		/* 68 Action22 <- <{ p.AddSequence() }> */
+		nil,
+		/* 69 Action23 <- <{ p.AddPeekNot(); p.AddDot(); p.AddSequence() }> */
+		nil,
+		/* 70 Action24 <- <{ p.AddPeekNot(); p.AddDot(); p.AddSequence() }> */
+		nil,
+		/* 71 Action25 <- <{ p.AddAlternate() }> */
+		nil,
+		/* 72 Action26 <- <{ p.AddAlternate() }> */
+		nil,
+		/* 73 Action27 <- <{ p.AddRange() }> */
+		nil,
+		/* 74 Action28 <- <{ p.AddDoubleRange() }> */
+		nil,
+		/* 75 Action29 <- <{ p.AddCharacter(text) }> */
+		nil,
+		/* 76 Action30 <- <{ p.AddDoubleCharacter(text) }> */
+		nil,
+		/* 77 Action31 <- <{ p.AddCharacter(text) }> */
+		nil,
+		/* 78 Action32 <- <{ p.AddCharacter("\a") }> */
+		nil,
+		/* 79 Action33 <- <{ p.AddCharacter("\b") }> */
+		nil,
+		/* 80 Action34 <- <{ p.AddCharacter("\x1B") }> */
+		nil,
+		/* 81 Action35 <- <{ p.AddCharacter("\f") }> */
+		nil,
+		/* 82 Action36 <- <{ p.AddCharacter("\n") }> */
+		nil,
+		/* 83 Action37 <- <{ p.AddCharacter("\r") }> */
+		nil,
+		/* 84 Action38 <- <{ p.AddCharacter("\t") }> */
+		nil,
+		/* 85 Action39 <- <{ p.AddCharacter("\v") }> */
+		nil,
+		/* 86 Action40 <- <{ p.AddCharacter("'") }> */
+		nil,
+		/* 87 Action41 <- <{ p.AddCharacter("\"") }> */
+		nil,
+		/* 88 Action42 <- <{ p.AddCharacter("[") }> */
+		nil,
+		/* 89 Action43 <- <{ p.AddCharacter("]") }> */
+		nil,
+		/* 90 Action44 <- <{ p.AddCharacter("-") }> */
+		nil,
+		/* 91 Action45 <- <{ p.AddHexaCharacter(text) }> */
+		nil,
+		/* 92 Action46 <- <{ p.AddOctalCharacter(text) }> */
+		nil,
+		/* 93 Action47 <- <{ p.AddOctalCharacter(text) }> */
+		nil,
+		/* 94 Action48 <- <{ p.AddCharacter("\\") }> */
+		nil,
+	}
+	p.rules = _rules
+	return nil
+}
diff --git a/peg_test.go b/peg_test.go
index 2da7f6e..c91d347 100644
--- a/peg_test.go
+++ b/peg_test.go
@@ -3,7 +3,10 @@ package main
 import (
 	"bytes"
 	"io/ioutil"
+	"os"
 	"testing"
+
+	"github.com/pointlander/peg/tree"
 )
 
 func TestCorrect(t *testing.T) {
@@ -11,12 +14,19 @@ func TestCorrect(t *testing.T) {
 type T Peg {}
 Grammar <- !.
 `
-	p := &Peg{Tree: New(false, false), Buffer: buffer}
+	p := &Peg{Tree: tree.New(false, false, false), Buffer: buffer}
 	p.Init()
 	err := p.Parse()
 	if err != nil {
 		t.Error(err)
 	}
+
+	p = &Peg{Tree: tree.New(false, false, false), Buffer: buffer}
+	p.Init(Size(1<<15))
+	err = p.Parse()
+	if err != nil {
+		t.Error(err)
+	}
 }
 
 func TestNoSpacePackage(t *testing.T) {
@@ -24,8 +34,8 @@ func TestNoSpacePackage(t *testing.T) {
 type T Peg {}
 Grammar <- !.
 `
-	p := &Peg{Tree: New(false, false), Buffer: buffer}
-	p.Init()
+	p := &Peg{Tree: tree.New(false, false, false), Buffer: buffer}
+	p.Init(Size(1<<15))
 	err := p.Parse()
 	if err == nil {
 		t.Error("packagenospace was parsed without error")
@@ -38,8 +48,8 @@ package p
 typenospace Peg {}
 Grammar <- !.
 `
-	p := &Peg{Tree: New(false, false), Buffer: buffer}
-	p.Init()
+	p := &Peg{Tree: tree.New(false, false, false), Buffer: buffer}
+	p.Init(Size(1<<15))
 	err := p.Parse()
 	if err == nil {
 		t.Error("typenospace was parsed without error")
@@ -52,43 +62,139 @@ func TestSame(t *testing.T) {
 		t.Error(err)
 	}
 
-	p := &Peg{Tree: New(true, true), Buffer: string(buffer)}
-	p.Init()
-	if err := p.Parse(); err != nil {
+	p := &Peg{Tree: tree.New(true, true, false), Buffer: string(buffer)}
+	p.Init(Size(1<<15))
+	if err = p.Parse(); err != nil {
 		t.Error(err)
 	}
 
 	p.Execute()
 
 	out := &bytes.Buffer{}
-	p.Compile("peg.peg.go", out)
+	p.Compile("peg.peg.go", []string{"./peg", "-inline", "-switch", "peg.peg"}, out)
 
-	bootstrap, err := ioutil.ReadFile("bootstrap.peg.go")
+	bootstrap, err := ioutil.ReadFile("peg.peg.go")
 	if err != nil {
 		t.Error(err)
 	}
 
 	if len(out.Bytes()) != len(bootstrap) {
-		t.Error("code generated from peg.peg is not the same as bootstrap.peg.go")
+		t.Error("code generated from peg.peg is not the same as .go")
 		return
 	}
 
 	for i, v := range out.Bytes() {
 		if v != bootstrap[i] {
-			t.Error("code generated from peg.peg is not the same as bootstrap.peg.go")
+			t.Error("code generated from peg.peg is not the same as .go")
 			return
 		}
 	}
 }
 
+func TestStrict(t *testing.T) {
+	tt := []string{
+		// rule used but not defined
+		`
+package main
+type test Peg {}
+Begin <- begin !.
+`,
+		// rule defined but not used
+		`
+package main
+type test Peg {}
+Begin <- .
+unused <- 'unused'
+`,
+		// left recursive rule
+		`package main
+type test Peg {}
+Begin <- Begin 'x'
+`,
+	}
+
+	for i, buffer := range tt {
+		p := &Peg{Tree: tree.New(false, false, false), Buffer: buffer}
+		p.Init(Size(1<<15))
+		if err := p.Parse(); err != nil {
+			t.Fatal(err)
+		}
+		p.Execute()
+
+		f, err := ioutil.TempFile("", "peg")
+		if err != nil {
+			t.Fatal(err)
+		}
+		defer func() {
+			os.Remove(f.Name())
+			f.Close()
+		}()
+		out := &bytes.Buffer{}
+		p.Strict = true
+		if err = p.Compile(f.Name(), []string{"peg"}, out); err == nil {
+			t.Fatalf("#%d: expected warning error", i)
+		}
+		p.Strict = false
+		if err = p.Compile(f.Name(), []string{"peg"}, out); err != nil {
+			t.Fatalf("#%d: unexpected error (%v)", i, err)
+		}
+	}
+}
+
+var files = [...]string{
+	"peg.peg",
+	"grammars/c/c.peg",
+	"grammars/calculator/calculator.peg",
+	"grammars/fexl/fexl.peg",
+	"grammars/java/java_1_7.peg",
+}
+
+func BenchmarkInitOnly(b *testing.B) {
+	pegs := []string{}
+	for _, file := range files {
+		input, err := ioutil.ReadFile(file)
+		if err != nil {
+			b.Error(err)
+		}
+		pegs = append(pegs, string(input))
+	}
+
+	b.ResetTimer()
+	for i := 0; i < b.N; i++ {
+		for _, peg := range pegs {
+			p := &Peg{Tree: tree.New(true, true, false), Buffer: peg}
+			p.Init(Size(1<<15))
+		}
+	}
+}
+
 func BenchmarkParse(b *testing.B) {
-	files := [...]string{
-		"peg.peg",
-		"grammars/c/c.peg",
-		"grammars/calculator/calculator.peg",
-		"grammars/fexl/fexl.peg",
-		"grammars/java/java_1_7.peg",
+	pegs := make([]*Peg, len(files))
+	for i, file := range files {
+		input, err := ioutil.ReadFile(file)
+		if err != nil {
+			b.Error(err)
+		}
+
+		p := &Peg{Tree: tree.New(true, true, false), Buffer: string(input)}
+		p.Init(Size(1<<15))
+		pegs[i] = p
 	}
+
+	b.ResetTimer()
+	for i := 0; i < b.N; i++ {
+		for _, peg := range pegs {
+			if err := peg.Parse(); err != nil {
+				b.Error(err)
+			}
+			b.StopTimer()
+			peg.Reset()
+			b.StartTimer()
+		}
+	}
+}
+
+func BenchmarkResetAndParse(b *testing.B) {
 	pegs := make([]*Peg, len(files))
 	for i, file := range files {
 		input, err := ioutil.ReadFile(file)
@@ -96,18 +202,63 @@ func BenchmarkParse(b *testing.B) {
 			b.Error(err)
 		}
 
-		p := &Peg{Tree: New(true, true), Buffer: string(input)}
-		p.Init()
+		p := &Peg{Tree: tree.New(true, true, false), Buffer: string(input)}
+		p.Init(Size(1<<15))
 		pegs[i] = p
 	}
 
 	b.ResetTimer()
 	for i := 0; i < b.N; i++ {
 		for _, peg := range pegs {
+			if err := peg.Parse(); err != nil {
+				b.Error(err)
+			}
 			peg.Reset()
+		}
+	}
+}
+
+func BenchmarkInitAndParse(b *testing.B) {
+	strs := []string{}
+	for _, file := range files {
+		input, err := ioutil.ReadFile(file)
+		if err != nil {
+			b.Error(err)
+		}
+		strs = append(strs, string(input))
+	}
+
+	b.ResetTimer()
+	for i := 0; i < b.N; i++ {
+		for _, str := range strs {
+			peg := &Peg{Tree: tree.New(true, true, false), Buffer: str}
+			peg.Init(Size(1<<15))
 			if err := peg.Parse(); err != nil {
 				b.Error(err)
 			}
 		}
 	}
 }
+
+func BenchmarkInitResetAndParse(b *testing.B) {
+	strs := []string{}
+	for _, file := range files {
+		input, err := ioutil.ReadFile(file)
+		if err != nil {
+			b.Error(err)
+		}
+		strs = append(strs, string(input))
+	}
+
+	b.ResetTimer()
+	for i := 0; i < b.N; i++ {
+		for _, str := range strs {
+			peg := &Peg{Tree: tree.New(true, true, false), Buffer: str}
+			peg.Init(Size(1<<15))
+			if err := peg.Parse(); err != nil {
+				b.Error(err)
+			}
+			peg.Reset()
+		}
+	}
+}
diff --git a/peg.go b/tree/peg.go
similarity index 74%
rename from peg.go
rename to tree/peg.go
index 360b3dd..c5ffb1c 100644
--- a/peg.go
+++ b/tree/peg.go
@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
-package main
+package tree
 
 import (
 	"bytes"
@@ -13,6 +13,7 @@ import (
 	"io"
 	"math"
 	"os"
+	"sort"
 	"strconv"
 	"strings"
 	"text/template"
@@ -22,6 +23,9 @@ import (
 
 const pegHeaderTemplate = `package {{.PackageName}}
 
+// Code generated by {{.Generator}} DO NOT EDIT.
+
+
 import (
 	{{range .Imports}}"{{.}}"
 	{{end}}
@@ -36,131 +40,82 @@ const (
 	ruleUnknown pegRule = iota
 	{{range .RuleNames}}rule{{.String}}
 	{{end}}
-	rulePre
-	ruleIn
-	ruleSuf
 )
 
 var rul3s = [...]string {
 	"Unknown",
 	{{range .RuleNames}}"{{.String}}",
 	{{end}}
-	"Pre_",
-	"_In_",
-	"_Suf",
 }
 
+type token32 struct {
+	pegRule
+	begin, end uint32
+}
+
+func (t *token32) String() string {
+	return fmt.Sprintf("\x1B[34m%v\x1B[m %v %v", rul3s[t.pegRule], t.begin, t.end)
+}
+
+{{if .Ast}}
 type node32 struct {
 	token32
 	up, next *node32
 }
 
-func (node *node32) print(depth int, buffer string) {
-	for node != nil {
-		for c := 0; c < depth; c++ {
-			fmt.Printf(" ")
-		}
-		fmt.Printf("\x1B[34m%v\x1B[m %v\n", rul3s[node.pegRule], strconv.Quote(string(([]rune(buffer)[node.begin:node.end]))))
-		if node.up != nil {
-			node.up.print(depth + 1, buffer)
+func (node *node32) print(w io.Writer, pretty bool, buffer string) {
+	var print func(node *node32, depth int)
+	print = func(node *node32, depth int) {
+		for node != nil {
+			for c := 0; c < depth; c++ {
+				fmt.Fprintf(w, " ")
+			}
+			rule := rul3s[node.pegRule]
+			quote := strconv.Quote(string(([]rune(buffer)[node.begin:node.end])))
+			if !pretty {
+				fmt.Fprintf(w, "%v %v\n", rule, quote)
+			} else {
+				fmt.Fprintf(w, "\x1B[36m%v\x1B[m %v\n", rule, quote)
+			}
+			if node.up != nil {
+				print(node.up, depth + 1)
+			}
+			node = node.next
 		}
-		node = node.next
 	}
+	print(node, 0)
 }
 
-func (node *node32) Print(buffer string) {
-	node.print(0, buffer)
-}
-
-type element struct {
-	node *node32
-	down *element
+func (node *node32) Print(w io.Writer, buffer string) {
+	node.print(w, false, buffer)
 }
 
-{{range .Sizes}}
-
-/* ${@} bit structure for abstract syntax tree */
-type token{{.}} struct {
-	pegRule
-	begin, end, next uint{{.}}
+func (node *node32) PrettyPrint(w io.Writer, buffer string) {
+	node.print(w, true, buffer)
 }
 
-func (t *token{{.}}) isZero() bool {
-	return t.pegRule == ruleUnknown && t.begin == 0 && t.end == 0 && t.next == 0
+type tokens32 struct {
+	tree		[]token32
 }
 
-func (t *token{{.}}) isParentOf(u token{{.}}) bool {
-	return t.begin <= u.begin && t.end >= u.end && t.next > u.next
+func (t *tokens32) Trim(length uint32) {
+	t.tree = t.tree[:length]
 }
 
-func (t *token{{.}}) getToken32() token32 {
-	return token32{pegRule: t.pegRule, begin: uint32(t.begin), end: uint32(t.end), next: uint32(t.next)}
-}
-
-func (t *token{{.}}) String() string {
-	return fmt.Sprintf("\x1B[34m%v\x1B[m %v %v %v", rul3s[t.pegRule], t.begin, t.end, t.next)
-}
-
-type tokens{{.}} struct {
-	tree		[]token{{.}}
-	ordered		[][]token{{.}}
-}
-
-func (t *tokens{{.}}) trim(length int) {
-	t.tree = t.tree[0:length]
-}
-
-func (t *tokens{{.}}) Print() {
+func (t *tokens32) Print() {
 	for _, token := range t.tree {
 		fmt.Println(token.String())
 	}
 }
 
-func (t *tokens{{.}}) Order() [][]token{{.}} {
-	if t.ordered != nil {
-		return t.ordered
-	}
-
-	depths := make([]int{{.}}, 1, math.MaxInt16)
-	for i, token := range t.tree {
-		if token.pegRule == ruleUnknown {
-			t.tree = t.tree[:i]
-			break
-		}
-		depth := int(token.next)
-		if length := len(depths); depth >= length {
-			depths = depths[:depth + 1]
-		}
-		depths[depth]++
-	}
-	depths = append(depths, 0)
-
-	ordered, pool := make([][]token{{.}}, len(depths)), make([]token{{.}}, len(t.tree) + len(depths))
-	for i, depth := range depths {
-		depth++
-		ordered[i], pool, depths[i] = pool[:depth], pool[depth:], 0
-	}
-
-	for i, token := range t.tree {
-		depth := token.next
-		token.next = uint{{.}}(i)
-		ordered[depth][depths[depth]] = token
-		depths[depth]++
+func (t *tokens32) AST() *node32 {
+	type element struct {
+		node *node32
+		down *element
 	}
-	t.ordered = ordered
-	return ordered
-}
-
-type state{{.}} struct {
-	token{{.}}
-	depths []int{{.}}
-	leaf bool
-}
-
-func (t *tokens{{.}}) AST() *node32 {
 	tokens := t.Tokens()
-	stack := &element{node: &node32{token32:<-tokens}}
-	for token := range tokens {
+	var stack *element
+	for _, token := range tokens {
 		if token.begin == token.end {
 			continue
 		}
@@ -172,179 +127,57 @@ func (t *tokens{{.}}) AST() *node32 {
 		}
 		stack = &element{node: node, down: stack}
 	}
-	return stack.node
-}
-
-func (t *tokens{{.}}) PreOrder() (<-chan state{{.}}, [][]token{{.}}) {
-	s, ordered := make(chan state{{.}}, 6), t.Order()
-	go func() {
-		var states [8]state{{.}}
-		for i := range states {
-			states[i].depths = make([]int{{.}}, len(ordered))
-		}
-		depths, state, depth := make([]int{{.}}, len(ordered)), 0, 1
-		write := func(t token{{.}}, leaf bool) {
-			S := states[state]
-			state, S.pegRule, S.begin, S.end, S.next, S.leaf = (state + 1) % 8, t.pegRule, t.begin, t.end, uint{{.}}(depth), leaf
-			copy(S.depths, depths)
-			s <- S
-		}
-
-		states[state].token{{.}} = ordered[0][0]
-		depths[0]++
-		state++
-		a, b := ordered[depth - 1][depths[depth - 1] - 1], ordered[depth][depths[depth]]
-		depthFirstSearch: for {
-			for {
-				if i := depths[depth]; i > 0 {
-					if c, j := ordered[depth][i - 1], depths[depth - 1]; a.isParentOf(c) &&
-						(j < 2 || !ordered[depth - 1][j - 2].isParentOf(c)) {
-						if c.end != b.begin {
-							write(token{{.}} {pegRule: ruleIn, begin: c.end, end: b.begin}, true)
-						}
-						break
-					}
-				}
-
-				if a.begin < b.begin {
-					write(token{{.}} {pegRule: rulePre, begin: a.begin, end: b.begin}, true)
-				}
-				break
-			}
-
-			next := depth + 1
-			if c := ordered[next][depths[next]]; c.pegRule != ruleUnknown && b.isParentOf(c) {
-				write(b, false)
-				depths[depth]++
-				depth, a, b = next, b, c
-				continue
-			}
-
-			write(b, true)
-			depths[depth]++
-			c, parent := ordered[depth][depths[depth]], true
-			for {
-				if c.pegRule != ruleUnknown && a.isParentOf(c) {
-					b = c
-					continue depthFirstSearch
-				} else if parent && b.end != a.end {
-					write(token{{.}} {pegRule: ruleSuf, begin: b.end, end: a.end}, true)
-				}
-
-				depth--
-				if depth > 0 {
-					a, b, c = ordered[depth - 1][depths[depth - 1] - 1], a, ordered[depth][depths[depth]]
-					parent = a.isParentOf(b)
-					continue
-				}
-
-				break depthFirstSearch
-			}
-		}
-
-		close(s)
-	}()
-	return s, ordered
-}
-
-func (t *tokens{{.}}) PrintSyntax() {
-	tokens, ordered := t.PreOrder()
-	max := -1
-	for token := range tokens {
-		if !token.leaf {
-			fmt.Printf("%v", token.begin)
-			for i, leaf, depths := 0, int(token.next), token.depths; i < leaf; i++ {
-				fmt.Printf(" \x1B[36m%v\x1B[m", rul3s[ordered[i][depths[i] - 1].pegRule])
-			}
-			fmt.Printf(" \x1B[36m%v\x1B[m\n", rul3s[token.pegRule])
-		} else if token.begin == token.end {
-			fmt.Printf("%v", token.begin)
-			for i, leaf, depths := 0, int(token.next), token.depths; i < leaf; i++ {
-				fmt.Printf(" \x1B[31m%v\x1B[m", rul3s[ordered[i][depths[i] - 1].pegRule])
-			}
-			fmt.Printf(" \x1B[31m%v\x1B[m\n", rul3s[token.pegRule])
-		} else {
-			for c, end := token.begin, token.end; c < end; c++ {
-				if i := int(c); max + 1 < i {
-					for j := max; j < i; j++ {
-						fmt.Printf("skip %v %v\n", j, token.String())
-					}
-					max = i
-				} else if i := int(c); i <= max {
-					for j := i; j <= max; j++ {
-						fmt.Printf("dupe %v %v\n", j, token.String())
-					}
-				} else {
-					max = int(c)
-				}
-				fmt.Printf("%v", c)
-				for i, leaf, depths := 0, int(token.next), token.depths; i < leaf; i++ {
-					fmt.Printf(" \x1B[34m%v\x1B[m", rul3s[ordered[i][depths[i] - 1].pegRule])
-				}
-				fmt.Printf(" \x1B[34m%v\x1B[m\n", rul3s[token.pegRule])
-			}
-			fmt.Printf("\n")
-		}
+	if stack != nil {
+		return stack.node
 	}
+	return nil
 }
 
-func (t *tokens{{.}}) PrintSyntaxTree(buffer string) {
-	tokens, _ := t.PreOrder()
-	for token := range tokens {
-		for c := 0; c < int(token.next); c++ {
-			fmt.Printf(" ")
-		}
-		fmt.Printf("\x1B[34m%v\x1B[m %v\n", rul3s[token.pegRule], strconv.Quote(string(([]rune(buffer)[token.begin:token.end]))))
-	}
+func (t *tokens32) PrintSyntaxTree(buffer string) {
+	t.AST().Print(os.Stdout, buffer)
 }
 
-func (t *tokens{{.}}) Add(rule pegRule, begin, end, depth uint32, index int) {
-	t.tree[index] = token{{.}}{pegRule: rule, begin: uint{{.}}(begin), end: uint{{.}}(end), next: uint{{.}}(depth)}
+func (t *tokens32) WriteSyntaxTree(w io.Writer, buffer string) {
+	t.AST().Print(w, buffer)
 }
 
-func (t *tokens{{.}}) Tokens() <-chan token32 {
-	s := make(chan token32, 16)
-	go func() {
-		for _, v := range t.tree {
-			s <- v.getToken32()
-		}
-		close(s)
-	}()
-	return s
+func (t *tokens32) PrettyPrintSyntaxTree(buffer string) {
+	t.AST().PrettyPrint(os.Stdout, buffer)
 }
 
-func (t *tokens{{.}}) Error() []token32 {
-	ordered := t.Order()
-	length := len(ordered)
-	tokens, length := make([]token32, length), length - 1
-	for i := range tokens {
-		o := ordered[length - i]
-		if len(o) > 1 {
-			tokens[i] = o[len(o) - 2].getToken32()
-		}
+func (t *tokens32) Add(rule pegRule, begin, end, index uint32) {
+	tree, i := t.tree, int(index)
+	if i >= len(tree) {
+		t.tree = append(tree, token32{pegRule: rule, begin: begin, end: end})
+		return
 	}
-	return tokens
+	tree[i] = token32{pegRule: rule, begin: begin, end: end}
 }
-{{end}}
 
-func (t *tokens32) Expand(index int) {
-	tree := t.tree
-	if index >= len(tree) {
-		expanded := make([]token32, 2 * len(tree))
-		copy(expanded, tree)
-		t.tree = expanded
-	}
+func (t *tokens32) Tokens() []token32 {
+	return t.tree
 }
+{{end}}
 
 type {{.StructName}} struct {
 	{{.StructVariables}}
 	Buffer		string
 	buffer		[]rune
 	rules		[{{.RulesCount}}]func() bool
-	Parse		func(rule ...int) error
-	Reset		func()
+	parse		func(rule ...int) error
+	reset		func()
 	Pretty 	bool
+{{if .Ast -}}
 	tokens32
+{{end -}}
+}
+
+func (p *{{.StructName}}) Parse(rule ...int) error {
+	return p.parse(rule...)
+}
+
+func (p *{{.StructName}}) Reset() {
+	p.reset()
 }
 
 type textPosition struct {
@@ -375,7 +208,7 @@ type parseError struct {
 }
 
 func (e *parseError) Error() string {
-	tokens, error := []token32{e.max}, "\n"
+	tokens, err := []token32{e.max}, "\n"
 	positions, p := make([]int, 2 * len(tokens)), 0
 	for _, token := range tokens {
 		positions[p], p = int(token.begin), p + 1
@@ -388,28 +221,39 @@ func (e *parseError) Error() string {
 	}
 	for _, token := range tokens {
 		begin, end := int(token.begin), int(token.end)
-		error += fmt.Sprintf(format,
+		err += fmt.Sprintf(format,
                          rul3s[token.pegRule],
                          translations[begin].line, translations[begin].symbol,
                          translations[end].line, translations[end].symbol,
                          strconv.Quote(string(e.p.buffer[begin:end])))
 	}
 
-	return error
+	return err
 }
 
+{{if .Ast}}
 func (p *{{.StructName}}) PrintSyntaxTree() {
-	p.tokens32.PrintSyntaxTree(p.Buffer)
+	if p.Pretty {
+		p.tokens32.PrettyPrintSyntaxTree(p.Buffer)
+	} else {
+		p.tokens32.PrintSyntaxTree(p.Buffer)
+	}
+}
+
+func (p *{{.StructName}}) WriteSyntaxTree(w io.Writer) {
+	p.tokens32.WriteSyntaxTree(w, p.Buffer)
 }
 
-func (p *{{.StructName}}) Highlighter() {
-	p.PrintSyntax()
+func (p *{{.StructName}}) SprintSyntaxTree() string {
+	var bldr strings.Builder
+	p.WriteSyntaxTree(&bldr)
+	return bldr.String()
 }
 
 {{if .HasActions}}
 func (p *{{.StructName}}) Execute() {
 	buffer, _buffer, text, begin, end := p.Buffer, p.buffer, "", 0, 0
-	for token := range p.Tokens() {
+	for _, token := range p.Tokens() {
 		switch (token.pegRule) {
 		{{if .HasPush}}
 		case rulePegText:
@@ -424,41 +268,82 @@ func (p *{{.StructName}}) Execute() {
 	_, _, _, _, _ = buffer, _buffer, text, begin, end
 }
 {{end}}
+{{end}}
 
-func (p *{{.StructName}}) Init() {
-	p.buffer = []rune(p.Buffer)
-	if len(p.buffer) == 0 || p.buffer[len(p.buffer) - 1] != endSymbol {
-		p.buffer = append(p.buffer, endSymbol)
+func Pretty(pretty bool) func(*{{.StructName}}) error {
+	return func(p *{{.StructName}}) error {
+		p.Pretty = pretty
+		return nil
 	}
+}
+
+{{if .Ast -}}
+func Size(size int) func(*{{.StructName}}) error {
+	return func(p *{{.StructName}}) error {
+		p.tokens32 = tokens32{tree: make([]token32, 0, size)}
+		return nil
+	}
+}
+{{end -}}
+
+func (p *{{.StructName}}) Init(options ...func(*{{.StructName}}) error) error {
+	var (
+		max token32
+		position, tokenIndex uint32
+		buffer []rune
+{{if not .Ast -}}
+{{if .HasPush -}}
+		text string
+{{end -}}
+{{end -}}
+	)
+	for _, option := range options {
+		err := option(p)
+		if err != nil {
+			return err
+		}
+	}
+	p.reset = func() {
+		max = token32{}
+		position, tokenIndex = 0, 0
 
-	tree := tokens32{tree: make([]token32, math.MaxInt16)}
-	var max token32
-	position, depth, tokenIndex, buffer, _rules := uint32(0), uint32(0), 0, p.buffer, p.rules
+		p.buffer = []rune(p.Buffer)
+		if len(p.buffer) == 0 || p.buffer[len(p.buffer) - 1] != endSymbol {
+			p.buffer = append(p.buffer, endSymbol)
+		}
+		buffer = p.buffer
+	}
+	p.reset()
 
-	p.Parse = func(rule ...int) error {
+	_rules := p.rules
+{{if .Ast -}}
+	tree := p.tokens32
+{{end -}}
+	p.parse = func(rule ...int) error {
 		r := 1
 		if len(rule) > 0 {
 			r = rule[0]
 		}
 		matches := p.rules[r]()
+{{if .Ast -}}
 		p.tokens32 = tree
+{{end -}}
 		if matches {
-			p.trim(tokenIndex)
+{{if .Ast -}}
+			p.Trim(tokenIndex)
+{{end -}}
 			return nil
 		}
 		return &parseError{p, max}
 	}
 
-	p.Reset = func() {
-		position, tokenIndex, depth = 0, 0, 0
-	}
-
 	add := func(rule pegRule, begin uint32) {
-		tree.Expand(tokenIndex)
-		tree.Add(rule, begin, position, depth, tokenIndex)
+{{if .Ast -}}
+		tree.Add(rule, begin, position, tokenIndex)
+{{end -}}
 		tokenIndex++
 		if begin != position && position > max.end {
-			max = token32{rule, begin, position, depth}
+			max = token32{rule, begin, position}
 		}
 	}
 
@@ -550,6 +435,7 @@ var TypeMap = [...]string{
 	"TypeRange",
 	"TypeString",
 	"TypePredicate",
+	"TypeStateChange",
 	"TypeCommit",
 	"TypeAction",
 	"TypePackage",
@@ -713,10 +599,11 @@ type Tree struct {
 	Rules      map[string]Node
 	rulesCount map[string]uint
 	node
-	inline, _switch bool
+	inline, _switch, Ast bool
+	Strict               bool
 
+	Generator       string
 	RuleNames       []Node
-	Sizes           [1]int
 	PackageName     string
 	Imports         []string
 	EndSymbol       rune
@@ -735,12 +622,14 @@ type Tree struct {
 	HasRange        bool
 }
 
-func New(inline, _switch bool) *Tree {
-	return &Tree{Rules: make(map[string]Node),
-		Sizes:      [1]int{32},
+func New(inline, _switch, noast bool) *Tree {
+	return &Tree{
+		Rules:      make(map[string]Node),
 		rulesCount: make(map[string]uint),
 		inline:     inline,
-		_switch:    _switch}
+		_switch:    _switch,
+		Ast:        !noast,
+	}
 }
 
 func (t *Tree) AddRule(name string) {
@@ -770,11 +659,11 @@ func (t *Tree) AddDoubleCharacter(text string) {
 }
 func (t *Tree) AddHexaCharacter(text string) {
 	hexa, _ := strconv.ParseInt(text, 16, 32)
-	t.PushFront(&node{Type: TypeCharacter, string: string(hexa)})
+	t.PushFront(&node{Type: TypeCharacter, string: string(rune(hexa))})
 }
 func (t *Tree) AddOctalCharacter(text string) {
 	octal, _ := strconv.ParseInt(text, 8, 8)
-	t.PushFront(&node{Type: TypeCharacter, string: string(octal)})
+	t.PushFront(&node{Type: TypeCharacter, string: string(rune(octal))})
 }
 func (t *Tree) AddPredicate(text string)   { t.PushFront(&node{Type: TypePredicate, string: text}) }
 func (t *Tree) AddStateChange(text string) { t.PushFront(&node{Type: TypeStateChange, string: text}) }
@@ -855,22 +744,39 @@ func escape(c string) string {
 	}
 }
 
-func (t *Tree) Compile(file string, out io.Writer) {
+func (t *Tree) Compile(file string, args []string, out io.Writer) (err error) {
 	t.AddImport("fmt")
-	t.AddImport("math")
+	if t.Ast {
+		t.AddImport("io")
+		t.AddImport("os")
+		t.AddImport("strings")
+	}
 	t.AddImport("sort")
 	t.AddImport("strconv")
 	t.EndSymbol = 0x110000
 	t.RulesCount++
 
+	t.Generator = strings.Join(args, " ")
+
+	var werr error
+	warn := func(e error) {
+		if werr == nil {
+			werr = fmt.Errorf("warning: %s.", e)
+		} else {
+			werr = fmt.Errorf("%s\nwarning: %s", werr, e)
+		}
+	}
+
 	counts := [TypeLast]uint{}
+	countsByRule := make([]*[TypeLast]uint, t.RulesCount)
 	{
 		var rule *node
-		var link func(node Node)
-		link = func(n Node) {
+		var link func(countsForRule *[TypeLast]uint, node Node)
+		link = func(countsForRule *[TypeLast]uint, n Node) {
 			nodeType := n.GetType()
 			id := counts[nodeType]
 			counts[nodeType]++
+			countsForRule[nodeType]++
 			switch nodeType {
 			case TypeAction:
 				n.SetId(int(id))
@@ -891,6 +797,7 @@ func (t *Tree) Compile(file string, out io.Writer) {
 
 				t.Rules[name] = emptyRule
 				t.RuleNames = append(t.RuleNames, emptyRule)
+				countsByRule = append(countsByRule, &[TypeLast]uint{})
 			case TypeName:
 				name := n.String()
 				if _, ok := t.Rules[name]; !ok {
@@ -904,6 +811,7 @@ func (t *Tree) Compile(file string, out io.Writer) {
 
 					t.Rules[name] = emptyRule
 					t.RuleNames = append(t.RuleNames, emptyRule)
+					countsByRule = append(countsByRule, &[TypeLast]uint{})
 				}
 			case TypePush:
 				copy, name := rule.Copy(), "PegText"
@@ -916,15 +824,16 @@ func (t *Tree) Compile(file string, out io.Writer) {
 
 					t.Rules[name] = emptyRule
 					t.RuleNames = append(t.RuleNames, emptyRule)
+					countsByRule = append(countsByRule, &[TypeLast]uint{})
 				}
 				n.PushBack(copy)
 				fallthrough
 			case TypeImplicitPush:
-				link(n.Front())
+				link(countsForRule, n.Front())
 			case TypeRule, TypeAlternate, TypeUnorderedAlternate, TypeSequence,
 				TypePeekFor, TypePeekNot, TypeQuery, TypeStar, TypePlus:
 				for _, node := range n.Slice() {
-					link(node)
+					link(countsForRule, node)
 				}
 			}
 		}
@@ -952,15 +861,21 @@ func (t *Tree) Compile(file string, out io.Writer) {
 				}
 			}
 		}
+		/* sort imports to satisfy gofmt */
+		sort.Strings(t.Imports)
+
 		/* second pass */
 		for _, node := range t.Slice() {
 			if node.GetType() == TypeRule {
 				rule = node
-				link(node)
+				counts := [TypeLast]uint{}
+				countsByRule[node.GetId()] = &counts
+				link(&counts, node)
 			}
 		}
 	}
 
+	usage := [TypeLast]uint{}
 	join([]func(){
 		func() {
 			var countRules func(node Node)
@@ -996,6 +911,13 @@ func (t *Tree) Compile(file string, out io.Writer) {
 					break
 				}
 			}
+			for id, reached := range ruleReached {
+				if reached {
+					for i, count := range countsByRule[id] {
+						usage[i] += count
+					}
+				}
+			}
 		},
 		func() {
 			var checkRecursion func(node Node) bool
@@ -1005,7 +927,7 @@ func (t *Tree) Compile(file string, out io.Writer) {
 				case TypeRule:
 					id := node.GetId()
 					if ruleReached[id] {
-						fmt.Fprintf(os.Stderr, "possible infinite left recursion in rule '%v'\n", node)
+						warn(fmt.Errorf("possible infinite left recursion in rule '%v'", node))
 						return false
 					}
 					ruleReached[id] = true
@@ -1121,7 +1043,7 @@ func (t *Tree) Compile(file string, out io.Writer) {
 						class := &node{Type: TypeUnorderedAlternate}
 						for d := 0; d < 256; d++ {
 							if properties[c].s.Has(uint64(d)) {
-								class.PushBack(&node{Type: TypeCharacter, string: string(d)})
+								class.PushBack(&node{Type: TypeCharacter, string: string(rune(d))})
 							}
 						}
 
@@ -1196,7 +1118,7 @@ func (t *Tree) Compile(file string, out io.Writer) {
 			}
 		}
 
-		for i, _ := range cache {
+		for i := range cache {
 			cache[i].reached = false
 		}
 		firstPass = false
@@ -1210,51 +1132,62 @@ func (t *Tree) Compile(file string, out io.Writer) {
 
 	var buffer bytes.Buffer
 	defer func() {
+		if t.Strict && werr != nil && err == nil {
+			// Treat warnings as errors.
+			err = werr
+		}
+		if !t.Strict && werr != nil {
+			// Display warnings.
+			fmt.Fprintln(os.Stderr, werr)
+		}
+		if err != nil {
+			return
+		}
 		fileSet := token.NewFileSet()
-		code, error := parser.ParseFile(fileSet, file, &buffer, parser.ParseComments)
-		if error != nil {
+		code, err := parser.ParseFile(fileSet, file, &buffer, parser.ParseComments)
+		if err != nil {
 			buffer.WriteTo(out)
-			fmt.Printf("%v: %v\n", file, error)
+			err = fmt.Errorf("%v: %v", file, err)
 			return
 		}
 		formatter := printer.Config{Mode: printer.TabIndent | printer.UseSpaces, Tabwidth: 8}
-		error = formatter.Fprint(out, fileSet, code)
-		if error != nil {
+		err = formatter.Fprint(out, fileSet, code)
+		if err != nil {
 			buffer.WriteTo(out)
-			fmt.Printf("%v: %v\n", file, error)
+			err = fmt.Errorf("%v: %v", file, err)
 			return
 		}
 
 	}()
 
 	_print := func(format string, a ...interface{}) { fmt.Fprintf(&buffer, format, a...) }
-	printSave := func(n uint) { _print("\n   position%d, tokenIndex%d, depth%d := position, tokenIndex, depth", n, n, n) }
-	printRestore := func(n uint) { _print("\n   position, tokenIndex, depth = position%d, tokenIndex%d, depth%d", n, n, n) }
-	printTemplate := func(s string) {
-		if error := template.Must(template.New("peg").Parse(s)).Execute(&buffer, t); error != nil {
-			panic(error)
-		}
+	printSave := func(n uint) { _print("\n   position%d, tokenIndex%d := position, tokenIndex", n, n) }
+	printRestore := func(n uint) { _print("\n   position, tokenIndex = position%d, tokenIndex%d", n, n) }
+	printTemplate := func(s string) error {
+		return template.Must(template.New("peg").Parse(s)).Execute(&buffer, t)
 	}
 
-	t.HasActions = counts[TypeAction] > 0
-	t.HasPush = counts[TypePush] > 0
-	t.HasCommit = counts[TypeCommit] > 0
-	t.HasDot = counts[TypeDot] > 0
-	t.HasCharacter = counts[TypeCharacter] > 0
-	t.HasString = counts[TypeString] > 0
-	t.HasRange = counts[TypeRange] > 0
+	t.HasActions = usage[TypeAction] > 0
+	t.HasPush = usage[TypePush] > 0
+	t.HasCommit = usage[TypeCommit] > 0
+	t.HasDot = usage[TypeDot] > 0
+	t.HasCharacter = usage[TypeCharacter] > 0
+	t.HasString = usage[TypeString] > 0
+	t.HasRange = usage[TypeRange] > 0
 
 	var printRule func(n Node)
-	var compile func(expression Node, ko uint)
+	var compile func(expression Node, ko uint) (labelLast bool)
 	var label uint
 	labels := make(map[uint]bool)
 	printBegin := func() { _print("\n   {") }
 	printEnd := func() { _print("\n   }") }
-	printLabel := func(n uint) {
+	printLabel := func(n uint) bool {
 		_print("\n")
 		if labels[n] {
 			_print("   l%d:\t", n)
+			return true
 		}
+		return false
 	}
 	printJump := func(n uint) {
 		_print("\n   goto l%d", n)
@@ -1336,13 +1269,13 @@ func (t *Tree) Compile(file string, out io.Writer) {
 			_print(">")
 		case TypeNil:
 		default:
-			fmt.Fprintf(os.Stderr, "illegal node type: %v\n", n.GetType())
+			warn(fmt.Errorf("illegal node type: %v", n.GetType()))
 		}
 	}
-	compile = func(n Node, ko uint) {
+	compile = func(n Node, ko uint) (labelLast bool) {
 		switch n.GetType() {
 		case TypeRule:
-			fmt.Fprintf(os.Stderr, "internal error #1 (%v)\n", n)
+			warn(fmt.Errorf("internal error #1 (%v)", n))
 		case TypeDot:
 			_print("\n   if !matchDot() {")
 			/*print("\n   if buffer[position] == endSymbol {")*/
@@ -1393,13 +1326,24 @@ func (t *Tree) Compile(file string, out io.Writer) {
 			nodeType, rule := element.GetType(), element.Next()
 			printBegin()
 			if nodeType == TypeAction {
-				_print("\nadd(rule%v, position)", rule)
+				if t.Ast {
+					_print("\nadd(rule%v, position)", rule)
+				} else {
+					// There is no AST support, so inline the rule code
+					_print("\n%v", element)
+				}
 			} else {
 				_print("\nposition%d := position", ok)
-				_print("\ndepth++")
 				compile(element, ko)
-				_print("\ndepth--")
-				_print("\nadd(rule%v, position%d)", rule, ok)
+				if n.GetType() == TypePush && !t.Ast {
+					// This is TypePush and there is no AST support,
+					// so inline capture to text right here
+					_print("\nbegin := position%d", ok)
+					_print("\nend := position")
+					_print("\ntext = string(buffer[begin:end])")
+				} else {
+					_print("\nadd(rule%v, position%d)", rule, ok)
+				}
 			}
 			printEnd()
 		case TypeAlternate:
@@ -1418,7 +1362,7 @@ func (t *Tree) Compile(file string, out io.Writer) {
 			}
 			compile(elements[len(elements)-1], ko)
 			printEnd()
-			printLabel(ok)
+			labelLast = printLabel(ok)
 		case TypeUnorderedAlternate:
 			done, ok := ko, label
 			label++
@@ -1441,18 +1385,20 @@ func (t *Tree) Compile(file string, out io.Writer) {
 					_print(" '%s'", escape(character.String()))
 				}
 				_print(":")
-				compile(sequence, done)
-				_print("\nbreak")
+				if compile(sequence, done) {
+					_print("\nbreak")
+				}
 			}
 			_print("\n   default:")
-			compile(last, done)
-			_print("\nbreak")
+			if compile(last, done) {
+				_print("\nbreak")
+			}
 			_print("\n   }")
 			printEnd()
-			printLabel(ok)
+			labelLast = printLabel(ok)
 		case TypeSequence:
 			for _, element := range n.Slice() {
-				compile(element, ko)
+				labelLast = compile(element, ko)
 			}
 		case TypePeekFor:
 			ok := label
@@ -1484,7 +1430,7 @@ func (t *Tree) Compile(file string, out io.Writer) {
 			printLabel(qko)
 			printRestore(qko)
 			printEnd()
-			printLabel(qok)
+			labelLast = printLabel(qok)
 		case TypeStar:
 			again := label
 			label++
@@ -1514,8 +1460,9 @@ func (t *Tree) Compile(file string, out io.Writer) {
 			printEnd()
 		case TypeNil:
 		default:
-			fmt.Fprintf(os.Stderr, "illegal node type: %v\n", n.GetType())
+			warn(fmt.Errorf("illegal node type: %v", n.GetType()))
 		}
+		return labelLast
 	}
 
 	/* lets figure out which jump labels are going to be used with this dry compile */
@@ -1548,7 +1495,9 @@ func (t *Tree) Compile(file string, out io.Writer) {
 	} else if length > math.MaxUint8 {
 		t.PegRuleType = "uint16"
 	}
-	printTemplate(pegHeaderTemplate)
+	if err = printTemplate(pegHeaderTemplate); err != nil {
+		return err
+	}
 	for _, element := range t.Slice() {
 		if element.GetType() != TypeRule {
 			continue
@@ -1556,7 +1505,7 @@ func (t *Tree) Compile(file string, out io.Writer) {
 		expression := element.Front()
 		if implicit := expression.Front(); expression.GetType() == TypeNil || implicit.GetType() == TypeNil {
 			if element.String() != "PegText" {
-				fmt.Fprintf(os.Stderr, "rule '%v' used but not defined\n", element)
+				warn(fmt.Errorf("rule '%v' used but not defined", element))
 			}
 			_print("\n  nil,")
 			continue
@@ -1567,7 +1516,7 @@ func (t *Tree) Compile(file string, out io.Writer) {
 		printRule(element)
 		_print(" */")
 		if count, ok := t.rulesCount[element.String()]; !ok {
-			fmt.Fprintf(os.Stderr, "rule '%v' defined but not used\n", element)
+			warn(fmt.Errorf("rule '%v' defined but not used", element))
 			_print("\n  nil,")
 			continue
 		} else if t.inline && count == 1 && ko != 0 {
@@ -1589,5 +1538,7 @@ func (t *Tree) Compile(file string, out io.Writer) {
 		_print("\n  },")
 	}
 	_print("\n }\n p.rules = _rules")
+	_print("\n return nil")
 	_print("\n}\n")
+	return nil
 }
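The generated parser's `Init` now accepts functional options instead of being a closure field, as exercised by `p.Init(Size(1<<15))` in the tests above. A minimal sketch of that pattern, using a hypothetical `Parser` type in place of the generated struct:

```go
package main

import "fmt"

// Parser stands in for the generated peg parser struct; the real
// generated type also carries rules, buffers, and token storage.
type Parser struct {
	Pretty bool
	tree   []int // stand-in for the preallocated token tree
}

// Option mirrors the func(*Parser) error options accepted by Init.
type Option func(*Parser) error

// Size preallocates token storage, like the generated Size option.
func Size(n int) Option {
	return func(p *Parser) error {
		p.tree = make([]int, 0, n)
		return nil
	}
}

// Pretty toggles colored syntax-tree output.
func Pretty(pretty bool) Option {
	return func(p *Parser) error {
		p.Pretty = pretty
		return nil
	}
}

// Init applies each option in order, stopping at the first error.
func (p *Parser) Init(options ...Option) error {
	for _, option := range options {
		if err := option(p); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	p := &Parser{}
	if err := p.Init(Size(1<<15), Pretty(true)); err != nil {
		panic(err)
	}
	fmt.Println(cap(p.tree), p.Pretty) // 32768 true
}
```

This is why callers in the new benchmarks write `p.Init(Size(1<<15))` where the old API was a bare `p.Init()`: configuration moved into variadic options, and `Init` gained an error return.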
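`Compile` also stops printing warnings straight to stderr: the `warn` closure folds them into a single accumulated error, and the new `Strict` flag promotes that accumulation to a hard error in the deferred block. A sketch of the accumulation policy, with hypothetical names:

```go
package main

import (
	"fmt"
	"os"
)

// collector mirrors the warn closure in Compile: every warning is
// folded into one error value rather than printed immediately.
type collector struct {
	werr error
}

func (c *collector) warn(e error) {
	if c.werr == nil {
		c.werr = fmt.Errorf("warning: %s.", e)
		return
	}
	c.werr = fmt.Errorf("%s\nwarning: %s", c.werr, e)
}

// finish applies the Strict policy: in strict mode accumulated
// warnings become the returned error; otherwise they are displayed.
func (c *collector) finish(strict bool) error {
	if strict {
		return c.werr
	}
	if c.werr != nil {
		fmt.Fprintln(os.Stderr, c.werr)
	}
	return nil
}

func main() {
	c := &collector{}
	c.warn(fmt.Errorf("rule 'unused' defined but not used"))
	c.warn(fmt.Errorf("possible infinite left recursion in rule 'Begin'"))
	fmt.Println(c.finish(true) != nil) // true
}
```

This matches what `TestStrict` above verifies: the same grammar that compiles with `p.Strict = false` must return an error when `p.Strict = true`.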

Debdiff

[The following lists of changes regard files as different if they have different names, permissions or owners.]

Files in second set of .debs but not in first

-rw-r--r--  root/root   /usr/share/gocode/src/github.com/pointlander/peg/build.go
-rw-r--r--  root/root   /usr/share/gocode/src/github.com/pointlander/peg/buildinfo.go
-rw-r--r--  root/root   /usr/share/gocode/src/github.com/pointlander/peg/go.mod
-rw-r--r--  root/root   /usr/share/gocode/src/github.com/pointlander/peg/go.sum
-rw-r--r--  root/root   /usr/share/gocode/src/github.com/pointlander/peg/peg.peg.go
-rw-r--r--  root/root   /usr/share/gocode/src/github.com/pointlander/peg/tree/peg.go

Files in first set of .debs but not in second

-rw-r--r--  root/root   /usr/share/gocode/src/github.com/pointlander/peg/peg.go

No differences were encountered between the control files of package golang-github-pointlander-peg-dev

Control files of package peg-go: lines which differ (wdiff format)

  • Built-Using: golang-1.19 golang-1.17 (= 1.19.8-2), 1.17.8-1), golang-github-pointlander-compress (= 1.1.0-7~jan+lint1), 1.1.0-6), golang-github-pointlander-jetset (= 1.0.0-4)
