New Upstream Snapshot - r-cran-zeligchoice

Ready changes

Summary

Merged new upstream version: 5.1.5+git20201212.1.f16809b (was: 0.9-6).

Resulting package

Built on 2022-10-22T19:52 (took 22m27s)

The resulting binary packages can be installed (if you have the apt repository enabled) by running one of:

apt install -t fresh-snapshots r-cran-zeligchoice
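
Once installed, the snapshot provides the Zelig 5 workflow (the upstream sources now build the full Zelig package rather than the old ZeligChoice add-on, as the DESCRIPTION diff below shows). A minimal sketch of that workflow, assuming the installed R package loads as Zelig, reusing the zelig/setx/sim/plot chain and the built-in swiss data that appear in the roxygen examples further down the diff:

# Minimal sketch, assuming the installed R package loads as "Zelig"
library(Zelig)

# Estimate a least-squares model on the built-in swiss data
z.out <- zelig(Fertility ~ Education, data = swiss, model = "ls")

# Choose a covariate scenario, simulate quantities of interest, and inspect them
x.out <- setx(z.out, Education = 10)
s.out <- sim(z.out, x = x.out)
summary(s.out)
plot(s.out)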

Diff

diff --git a/DESCRIPTION b/DESCRIPTION
index a665c98..2a68748 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,36 +1,68 @@
-Package: ZeligChoice
-License: GPL (>= 3)
-Title: Zelig Choice Models
+Package: Zelig
+License: GPL (>=3)
+Title: Everyone's Statistical Software
 Authors@R: c(
-    person("Christine.", "Choirat", role = "aut"),
-    person("Christopher", "Gandrud", email = "zelig.zee@gmail.com",
-    role = c("aut", "cre")),
+    person("Christine", "Choirat", role = "aut"),
+    person("Christopher", "Gandrud", role = "aut"),
     person("James", "Honaker", role = "aut"),
     person("Kosuke", "Imai", role = "aut"),
     person("Gary", "King", role = "aut"),
-    person("Olivia", "Lau", role = "aut")
+    person("Olivia", "Lau", role = "aut"),
+    person("Robert", "Treacy", email = "zelig.zee@gmail.com", role = c("aut", "cre")),
+    person("IQSS", "Harvard University", role = "cph")
     )
-Description: Add-on package for Zelig 5. Enables the use of a variety of logit
-    and probit regressions.
-URL: https://cran.r-project.org/package=ZeligChoice
+Description: A framework that brings together an abundance of common
+    statistical models found across packages into a unified interface, and
+    provides a common architecture for estimation and interpretation, as well
+    as bridging functions to absorb increasingly more models into the
+    package. Zelig allows each individual package, for each
+    statistical model, to be accessed by a common uniformly structured call and
+    set of arguments. Moreover, Zelig automates all the surrounding building
+    blocks of a statistical work-flow--procedures and algorithms that may be
+    essential to one user's application but which the original package
+    developer did not use in their own research and might not themselves
+    support. These include bootstrapping, jackknifing, and re-weighting of data.
+    In particular, Zelig automatically generates predicted and simulated
+    quantities of interest (such as relative risk ratios, average treatment
+    effects, first differences and predicted and expected values) to interpret
+    and visualize complex models.
+URL: https://cran.r-project.org/package=Zelig
 BugReports: https://github.com/IQSS/Zelig/issues
-Version: 0.9-6
-Date: 2017-06-07
-Imports: dplyr, Formula, jsonlite, MASS, methods, VGAM, Zelig (>=
-        5.1-1),
-Suggests: testthat, knitr, zeligverse
-Collate: 'model-mlogit.R' 'model-obinchoice.R' 'model-oprobit.R'
-        'model-ologit.R' 'model-bbinchoice.R' 'model-bprobit.R'
-        'model-blogit.R' 'create-json.R'
-RoxygenNote: 6.0.1
+Version: 5.1.7
+Date: 2020-12-03
+Depends: survival
+Imports: AER, Amelia, coda, dplyr (>= 0.3.0.2), Formula, geepack,
+        jsonlite, sandwich, MASS, MatchIt, maxLik, MCMCpack, methods,
+        quantreg, survey, VGAM
+Suggests: ei, eiPack, knitr, networkD3, optmatch, rmarkdown, testthat
+Collate: 'assertions.R' 'model-zelig.R' 'model-timeseries.R'
+        'model-ma.R' 'model-ar.R' 'model-arima.R' 'model-weibull.R'
+        'model-tobit.R' 'model-bayes.R' 'model-tobit-bayes.R'
+        'model-glm.R' 'model-binchoice.R' 'model-probit.R'
+        'model-probit-bayes.R' 'model-poisson.R'
+        'model-poisson-bayes.R' 'model-oprobit-bayes.R'
+        'model-normal.R' 'model-normal-bayes.R' 'model-mlogit-bayes.R'
+        'model-gamma.R' 'model-gee.R' 'model-logit.R'
+        'model-logit-bayes.R' 'model-factor-bayes.R'
+        'model-poisson-gee.R' 'model-normal-gee.R' 'model-gamma-gee.R'
+        'model-binchoice-gee.R' 'model-probit-gee.R'
+        'model-logit-gee.R' 'model-relogit.R' 'model-quantile.R'
+        'model-lognorm.R' 'model-exp.R' 'model-negbinom.R'
+        'model-ivreg.R' 'model-ls.R' 'utils.R' 'create-json.R'
+        'datasets.R' 'interface.R' 'model-survey.R'
+        'model-binchoice-survey.R' 'model-gamma-survey.R'
+        'model-logit-survey.R' 'model-normal-survey.R'
+        'model-poisson-survey.R' 'model-probit-survey.R' 'plots.R'
+        'wrappers.R'
+RoxygenNote: 7.1.1
 NeedsCompilation: no
-Packaged: 2017-06-07 15:07:31 UTC; cgandrud
-Author: Christine. Choirat [aut],
-  Christopher Gandrud [aut, cre],
+Packaged: 2022-10-22 19:35:37 UTC; root
+Author: Christine Choirat [aut],
+  Christopher Gandrud [aut],
   James Honaker [aut],
   Kosuke Imai [aut],
   Gary King [aut],
-  Olivia Lau [aut]
-Maintainer: Christopher Gandrud <zelig.zee@gmail.com>
-Repository: CRAN
-Date/Publication: 2017-06-07 22:44:09 UTC
+  Olivia Lau [aut],
+  Robert Treacy [aut, cre],
+  IQSS Harvard University [cph]
+Maintainer: Robert Treacy <zelig.zee@gmail.com>
diff --git a/MD5 b/MD5
deleted file mode 100644
index 827d372..0000000
--- a/MD5
+++ /dev/null
@@ -1,37 +0,0 @@
-f8bb0b7b780d5f5ec2953c629f8795e5 *DESCRIPTION
-f692b762adcbee68f1a35e850de75eb8 *NAMESPACE
-e64144e43fa2617e23005ffddba90cf4 *NEWS.md
-d2c074767923db20198a3844dcbb58f1 *R/create-json.R
-d9b2783702afd531aa0747650dda0884 *R/model-bbinchoice.R
-ce21e34cc591a2c0d1e1ddc8d523de35 *R/model-blogit.R
-49342bd27a805e11bac6ce0265847fb0 *R/model-bprobit.R
-3642b5ba772dd4300f60073789fe8cb3 *R/model-mlogit.R
-f805cf64c22189e613f0e1f81ea40bb7 *R/model-obinchoice.R
-3a17d0ec6f159dcc527e0bc568d357a2 *R/model-ologit.R
-be209e0b0519d12875e40225448be9e7 *R/model-oprobit.R
-0933d5594c16b1c6e51d27fa4d82efb1 *data/coalition.tab
-d9d6edeebc11b21f6f26ce2dd6db3352 *data/sanction.tab
-a1c355fda6c37fa1dbc5a5cb32a718e6 *demo/00Index
-070cdf9de6d3a4ea33fb2553260b3dc7 *demo/demo-blogit.R
-fdbaa08f66bc4b23f02b6668c39bbef6 *demo/demo-bprobit.R
-137a4b9e6d40afddc6bfb3bfbbed9953 *demo/demo-mlogit.R
-94f3ea473599ba6c9572d682d3ae0ffb *demo/demo-ologit.R
-e3e9ae3e4e890bf0770a340dbbbe1379 *demo/demo-oprobit.R
-9b56524740a74f40986424685b899f4a *inst/JSON/zelig5choicemodels.json
-6152355c61c471afe99457ead97b8aaf *man/Zelig-bbinchoice-class.Rd
-6fd039304e6a5f932c7da457f68e7114 *man/Zelig-blogit-class.Rd
-ff81a597ccd1a4c575e857fa7dc05d9b *man/Zelig-bprobit-class.Rd
-3a7f04bb9e9b137c5ee9d15d7503eb51 *man/Zelig-mlogit-class.Rd
-563c3cac6df5c503225375ffb981fed5 *man/Zelig-obinchoice-class.Rd
-aa81d7167a88039cf024013a88461bce *man/Zelig-ologit-class.Rd
-196e7ba712897f10f02303dea07e5634 *man/Zelig-oprobit-class.Rd
-3b01d1373c2b9f311a70f150f8a3e7cf *man/coalition.Rd
-7dacee6dd559c77c4bca0415690fb4b3 *man/construct.v.Rd
-575cc55096d53792a66814fd69ecb31a *man/createJSONzeligchoice.Rd
-ec6443ee11736e8d959b4233bb476c92 *man/ev.mlogit.Rd
-8d44a01b87dd6b7c012bc775ddfb58b6 *man/pv.mlogit.Rd
-685e8fe4738e2aad2ad73d7f2388570b *man/sanction.Rd
-eb1b000459ba2709b8007c0cb9fd9274 *tests/testthat.R
-1127751289432698fb8c087bc0ebdfd3 *tests/testthat/test-mlogit.R
-c6082c7f29e4b51ab4158e9fdb759f07 *tests/testthat/test-ologit.R
-524dd413c622bf5c114757b3f05f1843 *tests/testthat/test-oprobit.R
diff --git a/NAMESPACE b/NAMESPACE
old mode 100644
new mode 100755
index b2bf2ee..1ca3bb4
--- a/NAMESPACE
+++ b/NAMESPACE
@@ -1,21 +1,61 @@
-import(methods, Zelig, jsonlite, dplyr)
-
-importFrom("MASS", "polr", "mvrnorm")
-importFrom("VGAM", "vglm", "binom2.or", "binom2.rho", "constraints", "constraints.vlm")
-importFrom("stats", "runif", "rlogis", "plogis")
+import(sandwich, methods, survival, jsonlite, dplyr,
+       geepack, coda, Amelia, MatchIt, maxLik, survey)
 
+importFrom("AER", "tobit", "ivreg")
 importFrom("Formula", "as.Formula")
-
+importFrom("grDevices", "col2rgb", "heat.colors", "rgb")
+importFrom("graphics", "abline", "axis", "barplot", "box", "image",
+             "layout", "lines", "par", "polygon", "text")
+importFrom("stats", "binomial", "complete.cases", "density",
+           "glm", "lm", "lm.influence", "median", "model.frame",
+             "model.matrix", "model.response", "na.omit", "quantile",
+             "sd", "terms", "update", "ARMAacf", "rnorm", "pnorm")
+importFrom("MASS", "glm.nb", "rnegbin", "mvrnorm", "gamma.shape")
+importFrom("MCMCpack", "MCMCfactanal", "MCMClogit", "MCMCmnl", "MCMCregress",
+           "MCMCoprobit", "MCMCpoisson", "MCMCprobit", "MCMCtobit")
+importFrom("quantreg", "rq", "summary.rq", "bandwidth.rq")
+importFrom("VGAM", "vglm")
 importClassesFrom("VGAM", "vglm")
 importMethodsFrom("VGAM", "coef", "fitted", "predict", "vcov")
 
+
+S3method(summary, Arima)
+
 exportPattern("^[[:alpha:]]+")
 exportClasses(
-     "Zelig-bbinchoice",
-     "Zelig-blogit",
-     "Zelig-bprobit",
-     "Zelig-mlogit",
-     "Zelig-obinchoice",
-     "Zelig-ologit",
-     "Zelig-oprobit"
-)
\ No newline at end of file
+     "Zelig",
+     "Zelig-ls",
+     "Zelig-glm",
+     "Zelig-ivreg",
+     "Zelig-binchoice",
+     "Zelig-logit",
+     "Zelig-probit",
+     "Zelig-gamma",
+     "Zelig-exp",
+     "Zelig-negbin",
+     "Zelig-normal",
+     "Zelig-poisson",
+     "Zelig-lognorm",
+     "Zelig-tobit",
+     "Zelig-gee",
+     "Zelig-binchoice-gee",
+     "Zelig-logit-gee",
+     "Zelig-probit-gee",
+     "Zelig-gamma-gee",
+     "Zelig-normal-gee",
+     "Zelig-poisson-gee",
+     "Zelig-bayes",
+     "Zelig-factor-bayes",
+     "Zelig-logit-bayes",
+     "Zelig-mlogit-bayes",
+     "Zelig-normal-bayes",
+     "Zelig-oprobit-bayes",
+     "Zelig-poisson-bayes",
+     "Zelig-probit-bayes",
+     "Zelig-tobit-bayes",
+     "Zelig-weibull",
+     "Zelig-timeseries",
+     "Zelig-arima",
+     "Zelig-ar",
+     "Zelig-ma"
+)
diff --git a/NEWS.md b/NEWS.md
index 94d3c18..86bf6b5 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,37 +1,367 @@
-ZeligChoice version 0.9-6
-===============================
+> All changes to Zelig are documented here. GitHub issue numbers are given after
+each change note when relevant. See <https://github.com/IQSS/Zelig/issues>.
+External contributors are referenced with their GitHub usernames when
+applicable.
 
-- Test added to assess Zelig 5.1-2's getters with `mlogit` 
-estimated models.
+Zelig version 5.1.7
+==============================
 
-- Allow users to pass `weights` to estimation models. Zelig/#250
+## Major changes
 
-ZeligChoice version 0.9-5
-===============================
+- Removed zeligverse and ZeligChoice so as to get back onto CRAN.
 
-- Minor changes for compatibility with Zelig 5.1-0.
+- New maintainer @rtreacy (inheriting same contact email).
 
-ZeligChoice version 0.9-4
-===============================
+## Minor changes
 
-- Requires Zelig version 5.0-16, resolving plotting regression.
+- Updates of documentation and external links to meet CRAN checks.
 
-- Solved deep assignment issue that returned a series of warnings on build.
-#Zelig/172
+- Removal of some tests.
+
+Zelig version 5.1.6
+==============================
+
+## Major changes
+
+- All Zelig time series models are deprecated.
+
+## Minor changes
+
+- `predict`, `fitted`, `residuals` now accept arguments. #320
+
+Zelig version 5.1.5
+==============================
+
+++++ All Zelig time series models will be deprecated on 1 February 2018 ++++
+
+
+## Bug fixes
+
+-   Resolved an issue where `odds_ratios` standard errors were not correctly
+returned for `logit` and `relogit` models. Thanks to @retrography. #302
+
+-   Zelig 4 compatibility wrappers now work for `arima` models. Thanks to
+@mbsabath. #280
+
+-   Resolved an error when only `setx` was called with `arima` models. Thanks to
+@mbsabath. #299
+
+-   Resolved an error when `summary` was called after `sim` for `arima` models.
+#305
+
+-   Resolved an error when `sim` is used with differenced first-order
+autoregressive models. #307
+
+-   `arima` models return informative error when `data` is not found. #308
+
+## Minor
+
+-   Compatibility with testthat 2.0.0
+
+-   Documentation updated to correctly reflect that `tobit` wraps `AER::tobit`.
+#315
+
+-   Package terminology in documentation corrected. #316
+
+Zelig version 5.1-4
+==============================
+
+## Major changes
+
+-   Speed improvements made to `relogit`. Thanks to @retrography. #88
+
+-   Returns the `relogit` weighted case-control method to that described in
+King and Zeng (2001, eq. 11) and used in the Stata `relogit` implementation.
+#295
+
+-   Odds ratios now returned from `summary` with `relogit` models via the
+`odds_ratios = TRUE` argument. #302
+
+Zelig version 5.1-3
+==============================
+
+## Major changes
+
+-   Roxygen documentation improvements.
+
+## Minor changes and bug fixes
+
+-   `zquantile` with Amelia imputed data now working. #277
+
+-   `vcov` now works with `rq` quantile regression models.
+
+-   More informative error handling for conflicting `timeseries` model
+arguments. #283
+
+-   Resolved an issue with `relogit` that produced a warning when the fitted
+model object was passed to `predict`. #291
+
+
+Zelig version 5.1-2
+==============================
+
+## Major changes
+
+-   !EXPERIMENTAL! interface function `to_zelig` allows users to convert model
+objects fitted outside of Zelig to a Zelig object. The function is called
+within the `setx` wrapper if a non-Zelig object is supplied. Currently it
+only works for models fitted with `lm` and many estimated with `glm` and
+`svyglm`. #189
+
+-   `get_se` and `get_pvalue` function wrappers created for `get_se` and
+`get_pvalue` methods, respectively. #269
+
+-   If `combine_coef_se` is given a model estimated without multiply imputed
+data or bootstraps, an error is no longer returned. Instead a list of the
+models' untransformed coefficients, standard errors, and p-values is returned. #268
+
+-   `summary` for `logit` models now accepts the argument `odds_ratios`. When
+`TRUE` odds ratio estimates are returned rather than coefficient estimates.
+Thanks to Adam Obeng. PR/#270.
+
+- `setx` and `sim` fail informatively when passed ZeligEI objects. #271
+
+## Minor changes and bug fixes
+
+-   Resolved a bug where `weights` were not being passed to `svydesign`
+in survey models. #258
+
+-   Due to limited functionality and instability, zelig survey estimations
+now return a warning and a link to documentation on how to use `to_survey`
+via `setx` to bypass `zelig`. #273
+
+-   Resolved a bug where `from_zelig_model` would not extract fitted model
+objects for models estimated using `vglm`. #265
+
+-   `get_pvalue` and `get_se` now work for models estimated using `vglm`. #267
+
+-   Improved `ivreg`, `mlogit`, and getter (#266) documentation.
+
+Zelig version 5.1-1
+==============================
+
+## Minor changes
+
+-   Average Treatment Effect on the Treated (ATT) vignette added to the online
+documentation <http://docs.zeligproject.org/articles/att.html>
+
+-   Corrected vignette URLs.
+
+
+Zelig version 5.1-0
+==============================
+
+## Major changes
+
+-   Introduce a new model type for instrumental-variable regression: `ivreg`
+based on the `ivreg` from the AER package. #223
+
+-   Use the Formula package for formulas. This will enable a common syntax for
+multiple equations, though currently in Core Zelig it only
+enhances `ivreg`. #241
+
+-   `zelig` calls now support `update`ing formulas (#244) and `.` syntax for
+inserting all variables from `data` on the right-hand side of the formula.
+#87. See also #247.
+
+-   Arbitrary `log` transformations are now supported in `zelig` calls
+(except for `ivreg` regressors). #225
+
+-   Arbitrary `as.factor` and `factor` transformations are now supported in
+`zelig` calls.
+
+-   Restored quantile regression (`model = "rq"`). Currently only supports one
+`tau` at a time. #255
+
+-   Added `get_qi` wrapper for `get_qi` method.
+
+-   Added `ATT` wrapper for `ATT` method.
+
+-   `gee` models can now be estimated with multiply imputed data. #263
+
+## Minor changes and bug fixes
+
+-   `zelig` returns an error if `weights` are specified in a model estimated
+with multiply imputed data. (This was not possible before, but an uninformative
+error was returned.)
+
+-   Code improvement to `factor_coef_combine` so it does not return a warning
+for model types with more than 1 declared class.
+
+-   Reorganize README files to meet new CRAN requirements.
+
+-   Switch `bind_rows` for `rbind_all` in `zquantile` as the latter is deprecated.
+#255
+
+-   Depends on the survival package in order to enable `setx` for exponential
+models without explicitly loading survival. #254
+
+-   `relogit` now only accepts one `tau` per call (similar to `quantile`). Fixed
+to address #257.
 
 - Additional unit tests.
 
+Zelig version 5.0-17
+==============================
+
+## Major changes
+
+-   New function `combine_coef_se` takes as input a `zelig` model estimated
+using multiply imputed data or bootstrapping and returns a list of coefficients,
+standard errors, z-values, and p-values combined across the estimations. Thanks
+to @vincentarelbundock for prompting. #229
+
+-   The following changes were primarily made to re-establish Zelig integration
+with [WhatIf](https://CRAN.R-project.org/package=WhatIf). #236
+
+    + Added `zelig_setx_to_df` for extracting fitted values created by `setx`.
+
+    + Fitted factor level variable values are returned in a single column (not
+by parameter level) by `zelig_qi_to_df`.
+
+-   `setrange` (including `setx` used with a range of fitted values) now creates
+scenarios based on matches of equal length set ranges. This enables `setx` to
+work with polynomials, splines, etc. (currently only when these are created
+outside of the `zelig` call). #238
+
+## Minor changes and bug fixes
+
+-   Resolved a bug where appropriate `plot`s were not created for `mlogitbayes`. #206
+
+-   Arguments (such as `xlab`) can now be passed to `plot`. #237
+
+-   `zelig_qi_to_df` and `qi_slimmer` bug with multinomial response models
+resolved. #235
+
+-   Resolved a bug where `coef`, `coefficients`, `vcov`, `fitted`, and `predict`
+returned errors. Thanks to @vincentarelbundock for initially reporting. #231
+
+-   Reduced the number of digits shown by `summary` for fitted model objects.
+
+
+
+Zelig version 5.0-16
+==============================
+
+## Major changes
+
+-   !! Breaking change !! the `get*` functions (e.g. `getcoef`) now use
+underscores `_` to delimit words in the function names (e.g. `get_coef`). #214
+
+-   Added a number of new "getter" methods for extracting estimation elements:
+
+    + `get_names` method to return Zelig object field names. Also available via
+  `names`. #216
+
+    + `get_residuals` to extract fitted model residuals. Also available via
+  `residuals`.
+
+    + `get_df_residuals` method to return residual degrees-of-freedom.
+  Also accessible via `df.residuals`.
+
+    + `get_model_data` method to return the data frame used to estimate the
+  original model.
+
+    + `get_pvalue` and `get_se` methods to return estimated model p-values and
+  standard errors. Thank you to @vincentarelbundock for contributions. #147
+
+-   `zelig_qi_to_df` function for extracting simulated quantities of interest
+from a Zelig object and returning them as a tidy-formatted data frame. #189
+
+-   `setx` returns an error if it is unable to find a supplied variable name.
+
+-   `setx1` wrapper added to facilitate piped workflows for first differences.
+
+-   `zelig` can handle independent variables that are transformed using the
+natural logarithm inside of the call. #225
+
+## Minor changes and bug fixes
+
+-   Corrected an issue where `plot` would tend to choose a factor level as the
+x-axis variable when plotting a range of simulations. #226
+
+-   If a factor level variable's fitted value is not specified in `setx` and
+it is multi-modal, the last factor in the factor list is arbitrarily chosen.
+This replaces the previous behavior where the level was randomly chosen, causing
+unhelpful quantity of interest range plots. #226
+
+-   Corrected a bug where `summary` for ranges of `setx` would only show the
+first scenario. Now all scenarios are shown. #226
+
+-   Corrected a bug where the README.md was not included in the CRAN build.
+
+-   `to_zelig_mi` now can accept a list of data frames. Thanks to
+@vincentarelbundock.
+
+-   Internal code improvements.
+
+
+Zelig version 5.0-15
+==============================
+
+## Major changes
+
+-   Allows users to convert an independent variable to a factor within a `zelig`
+call using `as.factor`. #213
+
+-   `from_zelig_model` function to extract original fitted model objects from
+`zelig` estimation calls. This is useful for conducting non-Zelig supported
+post-estimation and easy integration with the texreg and stargazer packages
+for formatted parameter estimate tables. #189
+
+-   Additional MC tests for a wide range of models. #160
+
+## Minor changes
+
+-   Solved deep assignment issue that returned a series of warnings on build. #172
+
+## Bug fixes
+
+-   Resolves a bug from `set` where `sim` would fail for models that included
+factor level independent variables. #156
+
+-   Fixed an issue with `model-survey` where `ids` was hard coded as `~1`. #144
+
+-   Fixed `ATT` bug introduced in 5.0-14. #194
+
+-   Fixed `ci.plot` bug with `timeseries` models introduced in 5.0-15. #204
+
+
+Zelig version 5.0-14
+==============================
+
+## Major changes
+
+-   `mode` has been deprecated. Please use `Mode`. #152
+
+-   The Zelig 4 `sim` wrapper now intelligently looks for fitted values from the
+reference class object if not supplied via the x argument.
+
+-   New `to_zelig_mi` utility function for combining multiply imputed data sets
+for passing to `zelig`. `mi` will also work to enable backwards compatibility. #178
+
+-   Initial development on a new testing architecture and more tests for
+`model-*`, Zelig 4 wrappers, `ci.plot`, and the Zelig workflow.
+
+-   `graph` method now accepts simulations from `setx` and `setrange`. It uses
+`qi.plot` for the former and `ci.plot` for the latter.
+
+-   Improved error messages for Zelig 4 wrappers.
+
+-   Improved error messages if Zelig methods are supplied with too little
+information.
+
+-   `model-arima` now fails if the dependent variable does not vary for one of the
+cases.
 
-ZeligChoice version 0.9-3
-===============================
+## Minor changes
 
-- README dynamically generated example.
+-   Minor documentation improvements for Zelig 4 wrappers.
 
-- Solve bug where `model-mlogit` wouldn't create simulations due to a missing
-reference level. #15
+-   Dynamically generated README.md.
 
+-   Removed plyr package dependency.
 
-ZeligChoice version 0.9-2
-===============================
+-   `rbind_all` replaced by `bind_rows` as the former is deprecated by dplyr.
 
-- Resolves compatability issue with Zelig 5.0-14.
+-   Other internal code improvements.
diff --git a/R/assertions.R b/R/assertions.R
new file mode 100644
index 0000000..e5348d2
--- /dev/null
+++ b/R/assertions.R
@@ -0,0 +1,174 @@
+#' Check if is a zelig object
+#' @param x an object
+#' @param fail logical whether to return an error if x is not a Zelig object.
+
+is_zelig <- function(x, fail = TRUE) {
+    is_it <- inherits(x, "Zelig")
+    if (isTRUE(fail)) {
+        if(!isTRUE(is_it)) stop('Not a Zelig object.', call. = FALSE)
+    } else return(is_it)
+}
+
+#' Check if uninitializedField
+#' @param x a zelig.out method
+#' @param msg character string with the error message to return if
+#'   \code{fail = TRUE}.
+#' @param fail logical whether to return an error if x is uninitialized.
+
+is_uninitializedField <- function(x,
+                                  msg = 'Zelig model has not been estimated.',
+                                  fail = TRUE) {
+    passes <- FALSE
+    if (length(x) == 1) passes <- inherits(x, "uninitializedField")
+
+    if (isTRUE(fail)) {
+        if (isTRUE(passes))
+            stop(msg, call. = FALSE)
+    } else return(passes)
+}
+
+#' Check if any simulations are present in sim.out
+#' @param x a sim.out method
+#' @param fail logical whether to return an error if no simulations are present.
+
+is_sims_present <- function(x, fail = TRUE) {
+    passes <- TRUE
+    if (is.null(x$x) & is.null(x$range)) passes <- FALSE
+    if (length(x) > 0) passes <- TRUE
+    if (isTRUE(fail)) {
+        if (!isTRUE(passes))
+            stop('No simulated quantities of interest found.', call. = FALSE)
+    } else return(passes)
+}
+
+#' Check if simulations for individual values are present in sim.out
+#' @param x a sim.out method
+#' @param fail logical whether to return an error if simulation range is not
+#'   present.
+
+is_simsx <- function(x, fail = TRUE) {
+    passes <- TRUE
+    if (is.null(x$x)) passes <- FALSE
+    if (isTRUE(fail)) {
+        if (!isTRUE(passes))
+            stop('Simulations for individual fitted values are not present.',
+                call. = FALSE)
+    } else return(passes)
+}
+
+#' Check if simulations for individual values for x1 are present
+#'   in sim.out
+#' @param x a sim.out method
+#' @param fail logical whether to return an error if simulation range is not
+#'   present.
+
+is_simsx1 <- function(x, fail = TRUE) {
+    passes <- TRUE
+    if (is.null(x$x1)) passes <- FALSE
+    if (isTRUE(fail)) {
+        if (!isTRUE(passes))
+            stop('Simulations for individual fitted values are not present.',
+                call. = FALSE)
+    } else return(passes)
+}
+
+#' Check if simulations for a range of fitted values are present in sim.out
+#' @param x a sim.out method
+#' @param fail logical whether to return an error if simulation range is not
+#'   present.
+
+is_simsrange <- function(x, fail = TRUE) {
+    passes <- TRUE
+    if (is.null(x$range)) passes <- FALSE
+    if (isTRUE(fail)) {
+        if (!isTRUE(passes))
+            stop('Simulations for a range of fitted values are not present.',
+                call. = FALSE)
+    } else return(passes)
+}
+
+#' Check if simulations for a range1 of fitted values are present in sim.out
+#' @param x a sim.out method
+#' @param fail logical whether to return an error if simulation range is not
+#'   present.
+
+is_simsrange1 <- function(x, fail = TRUE) {
+    passes <- TRUE
+    if (is.null(x$range1)) passes <- FALSE
+    if (isTRUE(fail)) {
+        if (!isTRUE(passes))
+            stop('Simulations for a range of fitted values are not present.',
+                call. = FALSE)
+    } else return(passes)
+}
+
+#' Check if an object has a length greater than 1
+#' @param x an object
+#' @param msg character string with the error message to return if
+#'   \code{fail = TRUE}.
+#' @param fail logical whether to return an error if length is not greater than
+#'   1.
+
+is_length_not_1 <- function(x, msg = 'Length is 1.', fail = TRUE) {
+    passes <- TRUE
+
+    if (length(x) == 1) passes <- FALSE
+    if (isTRUE(fail)) {
+        if (!isTRUE(passes))
+            stop(msg, call. = FALSE)
+    } else return(passes)
+}
+
+#' Check if the values in a vector vary
+#' @param x a vector
+#' @param msg character string with the error message to return if
+#'   \code{fail = TRUE}.
+#' @param fail logical whether to return an error if \code{x} does not vary.
+
+is_varying <- function(x, msg = 'Vector does not vary.', fail = TRUE) {
+    if (!is.vector(x)) stop('x must be a vector.', call. = FALSE)
+    passes <- TRUE
+
+    if (length(unique(x)) == 1) passes <- FALSE
+    if (isTRUE(fail)) {
+        if (!isTRUE(passes))
+            stop(msg, call. = FALSE)
+    } else return(passes)
+}
+
+#' Check if a zelig object contains a time series model
+#'
+#' @param x a zelig object
+#' @param msg character string with the error message to return if
+#'   \code{fail = TRUE}.
+#' @param fail logical whether to return an error if \code{x} is not a timeseries.
+
+is_timeseries <- function(x, msg = 'Not a timeseries object.', fail = FALSE) {
+    is_zelig(x)
+    passes <- TRUE
+    if (!"timeseries" %in% x$category) passes <- FALSE
+    if (isTRUE(fail)) {
+        if (!isTRUE(passes))
+            stop(msg, call. = FALSE)
+    } else return(passes)
+}
+
+#' Check if an object was created with ZeligEI
+#'
+#' @param x a zelig object
+#' @param msg character string with the error message to return if
+#'   \code{fail = TRUE}.
+#' @param fail logical whether to return an error if \code{x} was created with ZeligEI.
+
+is_zeligei <- function(x, msg = "Function is not relevant for ZeligEI objects.",
+                       fail = TRUE) {
+    is_zelig(x)
+    passes <- FALSE
+
+    pkgs <- attr(class(x), "package")
+    if ("ZeligEI" %in% pkgs) passes <- TRUE
+    if (isTRUE(fail)) {
+        if (isTRUE(passes))
+            stop(msg, call. = FALSE)
+    } else return(passes)
+}
diff --git a/R/create-json.R b/R/create-json.R
old mode 100644
new mode 100755
index 9ec8eb5..246a2d7
--- a/R/create-json.R
+++ b/R/create-json.R
@@ -1,43 +1,194 @@
-#' @include model-bbinchoice.R
-#' @include model-blogit.R
-#' @include model-bprobit.R
-#' @include model-ologit.R
-#' @include model-oprobit.R
-#' @include model-mlogit.R
+#' @include utils.R
+#' @include model-zelig.R
+#' @include model-ls.R
+#' @include model-glm.R
+#' @include model-ivreg.R
+#' @include model-binchoice.R
+#' @include model-logit.R
+#' @include model-probit.R
+#' @include model-poisson.R
+#' @include model-normal.R
+#' @include model-gamma.R
+#' @include model-negbinom.R
+#' @include model-exp.R
+#' @include model-lognorm.R
+#' @include model-tobit.R
+#' @include model-quantile.R
+#' @include model-relogit.R
+#' @include model-gee.R
+#' @include model-binchoice-gee.R
+#' @include model-logit-gee.R
+#' @include model-probit-gee.R
+#' @include model-gamma-gee.R
+#' @include model-normal-gee.R
+#' @include model-poisson-gee.R
+#' @include model-bayes.R
+#' @include model-factor-bayes.R
+#' @include model-logit-bayes.R
+#' @include model-mlogit-bayes.R
+#' @include model-normal-bayes.R
+#' @include model-oprobit-bayes.R
+#' @include model-poisson-bayes.R
+#' @include model-probit-bayes.R
+#' @include model-tobit-bayes.R
+#' @include model-weibull.R
+#' @include model-timeseries.R
+#' @include model-arima.R
+#' @include model-ar.R
+#' @include model-ma.R
 
 #library(jsonlite)
 
+createJSON <- function(movefile = TRUE){
 
-createJSONzeligchoice <- function(){
+  z5ls <- zls$new()
+  z5ls$toJSON()
 
-  z5blogit <- zblogit$new()
-  z5blogit$toJSON()
+  z5logit <- zlogit$new()
+  z5logit$toJSON()
 
-  z5bprobit <- zbprobit$new()
-  z5bprobit$toJSON()
+  z5ivreg <- zivreg$new()
+  z5ivreg$toJSON()
 
-  z5mlogit <- zmlogit$new()
-  z5mlogit$toJSON()
+  z5probit <- zprobit$new()
+  z5probit$toJSON()
 
-  z5ologit <- zologit$new()
-  z5ologit$toJSON()
+  z5poisson <- zpoisson$new()
+  z5poisson$toJSON()
 
-  z5oprobit <- zoprobit$new()
-  z5oprobit$toJSON()
+  z5normal <- znormal$new()
+  z5normal$toJSON()
 
-  zeligchoicemodels <- list(zelig5choicemodels = list("blogit" = z5blogit$ljson,
-                                                    "bprobit" = z5bprobit$ljson,
-                                                    "mlogit" = z5mlogit$ljson,
-                                                    "ologit" = z5ologit$ljson,
-                                                    "oprobit" = z5oprobit$ljson))
+  z5gamma <- zgamma$new()
+  z5gamma$toJSON()
 
-  # cat(jsonlite::toJSON(zeligchoicemodels, pretty = TRUE),
-  #     file = file.path("inst/JSON", "zelig5choicemodels.json"))
+  z5negbin <- znegbin$new()
+  z5negbin$toJSON()
 
-  cat(toJSON(zeligchoicemodels, pretty = TRUE), file = file.path("zelig5choicemodels.json"))
-  file.rename(from = file.path("zelig5choicemodels.json"),
-            to = file.path("inst", "JSON", "zelig5choicemodels.json"))
-  file.remove(file.path("zelig5choicemodels.json"))
+  z5exp <- zexp$new()
+  z5exp$toJSON()
 
+  z5lognorm <- zlognorm$new()
+  z5lognorm$toJSON()
+
+  z5tobit <- ztobit$new()
+  z5tobit$toJSON()
+
+  z5quantile <- zquantile$new()
+  z5quantile$toJSON()
+
+  z5relogit <- zrelogit$new()
+  z5relogit$toJSON()
+
+  z5logitgee <- zlogitgee$new()
+  z5logitgee$toJSON()
+
+  z5probitgee <- zprobitgee$new()
+  z5probitgee$toJSON()
+
+  z5gammagee <- zgammagee$new()
+  z5gammagee$toJSON()
+
+  z5normalgee <- znormalgee$new()
+  z5normalgee$toJSON()
+
+  z5poissongee <- zpoissongee$new()
+  z5poissongee$toJSON()
+
+  z5factorbayes <- zfactorbayes$new()
+  z5factorbayes$toJSON()
+
+  z5logitbayes <- zlogitbayes$new()
+  z5logitbayes$toJSON()
+
+  z5mlogitbayes <- zmlogitbayes$new()
+  z5mlogitbayes$toJSON()
+
+  z5normalbayes <- znormalbayes$new()
+  z5normalbayes$toJSON()
+
+  z5oprobitbayes <- zoprobitbayes$new()
+  z5oprobitbayes$toJSON()
+
+  z5poissonbayes <- zpoissonbayes$new()
+  z5poissonbayes$toJSON()
+
+  z5probitbayes <- zprobitbayes$new()
+  z5probitbayes$toJSON()
+
+  z5tobitbayes <- ztobitbayes$new()
+  z5tobitbayes$toJSON()
+
+  z5weibull <- zweibull$new()
+  z5weibull$toJSON()
+
+  z5logitsurvey <- zlogitsurvey$new()
+  z5logitsurvey$toJSON()
+
+  z5probitsurvey <- zprobitsurvey$new()
+  z5probitsurvey$toJSON()
+
+  z5gammasurvey <- zgammasurvey$new()
+  z5gammasurvey$toJSON()
+
+  z5normalsurvey <- znormalsurvey$new()
+  z5normalsurvey$toJSON()
+
+  z5poissonsurvey <- zpoissonsurvey$new()
+  z5poissonsurvey$toJSON()
+
+  z5arima <- zarima$new()
+  z5arima$toJSON()
+
+  z5ar <- zar$new()
+  z5ar$toJSON()
+
+  z5ma <- zma$new()
+  z5ma$toJSON()
+
+  zeligmodels <- list(zelig5models = list(
+                    "ls" = z5ls$ljson,
+                    "ivreg" = z5ivreg$ljson,
+                    "logit" = z5logit$ljson,
+                    "probit" = z5probit$ljson,
+                    "poisson" = z5poisson$ljson,
+                    "normal" = z5normal$ljson,
+                    "gamma" = z5gamma$ljson,
+                    "negbin" = z5negbin$ljson,
+                    "exp" = z5exp$ljson,
+                    "lognorm" = z5lognorm$ljson,
+                    "tobit" = z5tobit$ljson,
+                    "quantile" = z5quantile$ljson,
+                    "relogit" = z5relogit$ljson,
+                    "logitgee" = z5logitgee$ljson,
+                    "probitgee" = z5probitgee$ljson,
+                    "gammagee" = z5gammagee$ljson,
+                    "normalgee" = z5normalgee$ljson,
+                    "poissongee" = z5poissongee$ljson,
+                    "factorbayes" = z5factorbayes$ljson,
+                    "logitbayes" = z5logitbayes$ljson,
+                    "mlogitbayes" = z5mlogitbayes$ljson,
+                    "normalbayes" = z5normalbayes$ljson,
+                    "oprobitbayes" = z5oprobitbayes$ljson,
+                    "poissonbayes" = z5poissonbayes$ljson,
+                    "probitbayes" = z5probitbayes$ljson,
+                    "tobitbayes" = z5tobitbayes$ljson,
+                    "weibull" = z5weibull$ljson,
+                    "logitsurvey" = z5logitsurvey$ljson,
+                    "probitsurvey" = z5probitsurvey$ljson,
+                    "normalsurvey" = z5normalsurvey$ljson,
+                    "gammasurvey" = z5gammasurvey$ljson,
+                    "poissonsurvey" = z5poissonsurvey$ljson,
+                    "arima" = z5arima$ljson,
+                    "ma" = z5ma$ljson,
+                    "ar" = z5ar$ljson))
+
+  cat(toJSON(zeligmodels, pretty = TRUE), "\n",
+      file = file.path("zelig5models.json"))
+
+  if (movefile){
+    file.rename(from = file.path("zelig5models.json"),
+                to = file.path("inst", "JSON", "zelig5models.json"))
+  }
   return(TRUE)
-}
\ No newline at end of file
+}
diff --git a/R/datasets.R b/R/datasets.R
new file mode 100644
index 0000000..20a5b1c
--- /dev/null
+++ b/R/datasets.R
@@ -0,0 +1,11 @@
+#' Cigarette Consumption Panel Data
+#'
+#' @docType data
+#' @source From Christian Kleiber and Achim Zeileis (2008). Applied
+#' Econometrics with R. New York: Springer-Verlag. ISBN 978-0-387-77316-2. URL
+#' <https://CRAN.R-project.org/package=AER>
+#' @keywords datasets
+#' @md
+#' @format A data set with 96 observations and 9 variables
+#' @name CigarettesSW
+NULL
diff --git a/R/interface.R b/R/interface.R
new file mode 100644
index 0000000..397bb2d
--- /dev/null
+++ b/R/interface.R
@@ -0,0 +1,593 @@
+#' Instructions for how to convert non-Zelig fitted model objects to Zelig.
+#' Used in to_zelig
+model_lookup_df <- data.frame(
+    rbind(
+        c(class = "lm", family = "gaussian", link = "identity", zclass = "zls"),
+        c(class = "glm", family = "gaussian", link = "identity", zlcass = "zls"),
+        c(class = "glm", family = "binomial", link = "logit", zclass = "zlogit"),
+        c(class = "glm", family = "binomial", link = "probit", zclass = "zprobit"),
+        c(class = "glm", family = "poisson",  link = "log", zclass = "zpoisson"),
+        c(class = "glm", family = "Gamma", link = "inverse", zclass = "zgamma"),
+        c(class = "svyglm", family = "gaussian", link = "identity", zclass = "znormalsurvey"),
+        c(class = "svyglm", family = "binomial", link = "logit", zclass = "zlogitsurvey"),
+        c(class = "svyglm", family = "quasibinomial", link = "logit", zclass = "zlogitsurvey")),
+    stringsAsFactors = FALSE)
+
+#' Coerce a non-Zelig fitted model object to a Zelig class object
+#'
+#' @param obj a model object fitted using \code{lm} and, for many families,
+#'    \code{glm}. Note: support for more model classes is intended in future Zelig releases.
+#'
+#' @examples
+#' library(dplyr)
+#' lm.out <- lm(Fertility ~ Education, data = swiss)
+#'
+#' z.out <- to_zelig(lm.out)
+#'
+#' # to_zelig called from within setx
+#' setx(z.out) %>% sim() %>% plot()
+#'
+#' @author Christopher Gandrud and Ista Zhan
+#' @importFrom dplyr group_by_ %>% do
+#' @export
+
+to_zelig <- function(obj) {
+    message('to_zelig is an experimental function.\n  Please report issues to: https://github.com/IQSS/Zelig/issues\n')
+    not_found_msg <- "Not a Zelig object and not convertible to one."
+
+    # attempt to determine model type and initialize model
+    try_na <- function(x) tryCatch(x, error = function(c)
+                                   stop(not_found_msg, call. = FALSE))
+
+    model_info <- data.frame(
+                            class = try_na(class(obj)[1]),
+                            family = try_na(family(obj)$family),
+                            link = try_na(family(obj)$link),
+                            stringsAsFactors = FALSE
+                            )
+    zmodel <- merge(model_info, model_lookup_df)$zclass
+    if(length(zmodel) != 1) stop(not_found_msg, call. = FALSE)
+    message(sprintf("Assuming %s to convert to Zelig.", zmodel))
+
+    new_obj <- eval(parse(text = sprintf("%s$new()", zmodel)))
+    new_obj$mi <- FALSE
+    new_obj$bootstrap <- FALSE
+    new_obj$matched  <- FALSE
+    new_obj$mi <- FALSE
+    new_obj$data <- cbind(1, obj$model)
+    names(new_obj$data)[1] <- "by"
+    new_obj$by <- "by"
+    new_obj$data <- tbl_df(new_obj$data)
+    new_obj$formula <- as.Formula(obj$call$formula)
+    new_obj$weights <- NULL
+    new_obj$zelig.call <- obj$call
+    new_obj$model.call <- obj$call
+    new_obj$model.call$weights <- NULL
+
+    new_obj$zelig.out <- new_obj$data %>%
+        group_by_(new_obj$by) %>% do(z.out = obj)
+
+    #new_obj$zelig.out <- tibble::as_tibble(list(by = 1, z.out = obj))
+
+    return(new_obj)
+}
+
+#' Extract the original fitted model object from a \code{zelig} estimation
+#'
+#' @param obj a zelig object with an estimated model
+#'
+#' @details Extracts the original fitted model object from a \code{zelig}
+#'   estimation. This can be useful for passing output to non-Zelig
+#'   post-estimation functions and packages such as texreg and stargazer
+#'   for creating well-formatted presentation document tables.
+#'
+#' @examples
+#' z5 <- zls$new()
+#' z5$zelig(Fertility ~ Education, data = swiss)
+#' from_zelig_model(z5)
+#'
+#' @author Christopher Gandrud
+#' @export
+
+from_zelig_model <- function(obj) {
+  is_zelig(obj)
+
+  f5 <- obj$copy()
+  return(f5$from_zelig_model())
+}
+
+#' Extract simulated quantities of interest from a zelig object
+#'
+#' @param obj a zelig object with simulated quantities of interest
+#'
+#' @details Returns simulated quantities of interest in a tidy-formatted
+#'   `data.frame`. This can be useful for creating custom plots.
+#'
+#'  Each row contains a simulated value and each column contains:
+#'
+#'  - `setx_value` whether the simulations are from the base `x` `setx` or the
+#'      contrasting `x1` for finding first differences.
+#'  - The fitted values specified in `setx` including a `by` column if
+#'     `by` was used in the \code{\link{zelig}} call.
+#'  - `expected_value`
+#'  - `predicted_value`
+#'
+#'  For multinomial response models, a separate column is given for the expected
+#'    probability of each outcome in the form `expected_*`. Additionally, there
+#'    is a column of the predicted outcomes (`predicted_value`).
+#'
+#' @examples
+#' #### QIs without first difference or range, from covariates fitted at
+#' ## central tendencies
+#' z.1 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+#'              model = "ls")
+#' z.1 <- setx(z.1)
+#' z.1 <- sim(z.1)
+#' head(zelig_qi_to_df(z.1))
+#'
+#' #### QIs for first differences
+#' z.2 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+#'              model = "ls")
+#' z.2a <- setx(z.2, Petal.Length = 2)
+#' z.2b <- setx(z.2, Petal.Length = 4.4)
+#' z.2 <- sim(z.2, x = z.2a, x1 = z.2b)
+#' head(zelig_qi_to_df(z.2))
+#'
+#' #### QIs for first differences, estimated by Species
+#' z.3 <- zelig(Petal.Width ~ Petal.Length, by = "Species", data = iris,
+#'              model = "ls")
+#' z.3a <- setx(z.3, Petal.Length = 2)
+#' z.3b <- setx(z.3, Petal.Length = 4.4)
+#' z.3 <- sim(z.3, x = z.3a, x1 = z.3b)
+#' head(zelig_qi_to_df(z.3))
+#'
+#' #### QIs for a range of fitted values
+#' z.4 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+#'              model = "ls")
+#' z.4 <- setx(z.4, Petal.Length = 2:4)
+#' z.4 <- sim(z.4)
+#' head(zelig_qi_to_df(z.4))
+#'
+#' #### QIs for a range of fitted values, estimated by Species
+#' z.5 <- zelig(Petal.Width ~ Petal.Length, by = "Species", data = iris,
+#'             model = "ls")
+#' z.5 <- setx(z.5, Petal.Length = 2:4)
+#' z.5 <- sim(z.5)
+#' head(zelig_qi_to_df(z.5))
+#'
+#' #### QIs for two ranges of fitted values
+#' z.6 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+#'             model = "ls")
+#' z.6a <- setx(z.6, Petal.Length = 2:4, Species = "setosa")
+#' z.6b <- setx(z.6, Petal.Length = 2:4, Species = "virginica")
+#' z.6 <- sim(z.6, x = z.6a, x1 = z.6b)
+#'
+#' head(zelig_qi_to_df(z.6))
+#'
+#' @source For a discussion of tidy data see
+#' <https://www.jstatsoft.org/article/view/v059i10>.
+#'
+#' @seealso \code{\link{qi_slimmer}}
+#' @md
+#' @author Christopher Gandrud
+#' @export
+
+zelig_qi_to_df <- function(obj) {
+
+  is_zelig(obj)
+  is_sims_present(obj$sim.out)
+
+  comb <- data.frame()
+  if (is_simsx(obj$sim.out, fail = FALSE)) {
+    comb_temp <- extract_setx(obj)
+    comb <- rbind(comb, comb_temp)
+  }
+  if (is_simsx1(obj$sim.out, fail = FALSE)) {
+    comb_temp <- extract_setx(obj, which_x = 'x1')
+    comb <- rbind(comb, comb_temp)
+  }
+  if (is_simsrange(obj$sim.out, fail = FALSE)) {
+    comb_temp <- extract_setrange(obj)
+    comb <- rbind(comb, comb_temp)
+  }
+  if (is_simsrange1(obj$sim.out, fail = FALSE)) {
+    comb_temp <- extract_setrange(obj, which_range = 'range1')
+    comb <- rbind(comb, comb_temp)
+  }
+
+  # Need range1
+  if (nrow(comb) == 0) stop('Unable to find simulated quantities of interest.',
+                            call. = FALSE)
+  return(comb)
+}
+
+#' Extracted fitted values from a Zelig object with `setx` values
+#'
+#' @param obj a zelig object with simulated quantities of interest
+#'
+#' @details Returns fitted (`setx`) values in a tidy-formatted
+#'   `data.frame`. This was designed to enable the WhatIf package's
+#'   `whatif` function to extract "counterfactuals".
+#'
+#' @examples
+#' #### QIs without first difference or range, from covariates fitted at
+#' ## central tendencies
+#' z.1 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+#'              model = "ls")
+#' z.1 <- setx(z.1)
+#' zelig_setx_to_df(z.1)
+#'
+#' #### QIs for first differences
+#' z.2 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+#'              model = "ls")
+#' z.2 <- setx(z.2, Petal.Length = 2)
+#' z.2 <- setx1(z.2, Petal.Length = 4.4)
+#' zelig_setx_to_df(z.2)
+#'
+#' #### QIs for first differences, estimated by Species
+#' z.3 <- zelig(Petal.Width ~ Petal.Length, by = "Species", data = iris,
+#'              model = "ls")
+#' z.3 <- setx(z.3, Petal.Length = 2)
+#' z.3 <- setx1(z.3, Petal.Length = 4.4)
+#' zelig_setx_to_df(z.3)
+#'
+#' #### QIs for a range of fitted values
+#' z.4 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+#'              model = "ls")
+#' z.4 <- setx(z.4, Petal.Length = 2:4)
+#' zelig_setx_to_df(z.4)
+#'
+#' #### QIs for a range of fitted values, estimated by Species
+#' z.5 <- zelig(Petal.Width ~ Petal.Length, by = "Species", data = iris,
+#'              model = "ls")
+#' z.5 <- setx(z.5, Petal.Length = 2:4)
+#' zelig_setx_to_df(z.5)
+#'
+#' #### QIs for two ranges of fitted values
+#' z.6 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+#'              model = "ls")
+#' z.6 <- setx(z.6, Petal.Length = 2:4, Species = "setosa")
+#' z.6 <- setx1(z.6, Petal.Length = 2:4, Species = "virginica")
+#' zelig_setx_to_df(z.6)
+#'
+#' @md
+#' @author Christopher Gandrud
+#' @export
+
+zelig_setx_to_df <- function(obj) {
+
+    is_zelig(obj)
+
+    comb <- data.frame()
+    if (!is.null(obj$setx.out$x)) {
+        comb_temp <- extract_setx(obj, only_setx = TRUE)
+        comb <- rbind(comb, comb_temp)
+    }
+    if (!is.null(obj$setx.out$x1)) {
+        comb_temp <- extract_setx(obj, which_x = 'x1', only_setx = TRUE)
+        comb <- rbind(comb, comb_temp)
+    }
+    if (!is.null(obj$setx.out$range)) {
+        comb_temp <- extract_setrange(obj, only_setx = TRUE)
+        comb <- rbind(comb, comb_temp)
+    }
+    if (!is.null(obj$setx.out$range1)) {
+        comb_temp <- extract_setrange(obj, which_range = 'range1',
+                                      only_setx = TRUE)
+        comb <- rbind(comb, comb_temp)
+    }
+
+    # Need range1
+    if (nrow(comb) == 0) stop('Unable to find fitted (setx) values.',
+                              call. = FALSE)
+    return(comb)
+}
+
+
+#' Extract setx for non-range and return tidy formatted data frame
+#'
+#' @param obj a zelig object containing simulated quantities of interest
+#' @param which_x character string either `'x'` or `'x1'` indicating whether
+#'   to extract the first or second set of fitted values
+#' @param only_setx logical whether or not to only extract `setx` values.
+#'
+#' @seealso \code{\link{zelig_qi_to_df}}
+#' @author Christopher Gandrud
+#'
+#' @md
+#' @keywords internal
+
+extract_setx <- function(obj, which_x = 'x', only_setx = FALSE) {
+
+    temp_comb <- data.frame()
+    all_fitted <- obj$setx.out[[which_x]]
+    if (!only_setx) all_sims <- obj$sim.out[[which_x]]
+
+    temp_fitted <- as.data.frame(all_fitted$mm[[1]],
+                                row.names = NULL)
+
+    by_length <- nrow(all_fitted)
+    if (by_length > 1) {
+        temp_fitted <- temp_fitted[rep(seq_len(nrow(temp_fitted)), by_length), ]
+        temp_fitted <- data.frame(by = all_fitted[[1]],
+                                    temp_fitted, row.names = NULL)
+    }
+    temp_fitted <- rm_intercept(temp_fitted)
+    temp_fitted <- factor_coef_combine(obj, temp_fitted)
+
+    if (!only_setx) {
+        temp_ev <- lapply(all_sims$ev, unlist)
+        temp_pv <- lapply(all_sims$pv, unlist)
+
+        for (i in 1:nrow(temp_fitted)) {
+            temp_qi <- data.frame(temp_ev[[i]], temp_pv[[i]])
+            if (ncol(temp_qi) == 2)
+                names(temp_qi) <- c('expected_value', 'predicted_value')
+            else if (ncol(temp_qi) > 2 & is.factor(temp_pv[[i]]))
+                names(temp_qi) <- c(sprintf('expected_%s', colnames(temp_ev[[i]])),
+                                    'predicted_value')
+
+            temp_df <- cbind(temp_fitted[i, ], temp_qi, row.names = NULL)
+            temp_comb <- rbind(temp_comb, temp_df)
+        }
+        temp_comb$setx_value <- which_x
+        temp_comb <- temp_comb[, c(ncol(temp_comb), 1:(ncol(temp_comb)-1))]
+
+        return(temp_comb)
+    }
+    else if (only_setx) return(temp_fitted)
+
+}
+
+#' Extract setrange to return as tidy formatted data frame
+#'
+#' @param obj a zelig object containing a range of simulated quantities of
+#'   interest
+#' @param which_range character string either `'range'` or `'range1'`
+#'   indicating whether to extract the first or second set of fitted values
+#' @param only_setx logical whether or not to only extract `setx` values.
+#'
+#' @seealso \code{\link{zelig_qi_to_df}}
+#' @author Christopher Gandrud
+#'
+#' @md
+#' @keywords internal
+
+extract_setrange <- function(obj, which_range = 'range', only_setx = FALSE) {
+
+    temp_comb <- data.frame()
+    all_fitted <- obj$setx.out[[which_range]]
+    if (!only_setx) all_sims <- obj$sim.out[[which_range]]
+
+    for (i in 1:length(all_fitted)) {
+        temp_fitted <- as.data.frame(all_fitted[[i]]$mm[[1]], row.names = NULL)
+
+        by_length <- nrow(all_fitted[[i]])
+        if (by_length > 1) {
+            temp_fitted <- temp_fitted[rep(seq_len(nrow(temp_fitted)),
+                                     by_length), ]
+            temp_fitted <- data.frame(by = all_fitted[[i]][[1]], temp_fitted,
+                                        row.names = NULL)
+        }
+        temp_fitted <- rm_intercept(temp_fitted)
+        temp_fitted <- factor_coef_combine(obj, temp_fitted)
+
+        if (!only_setx) {
+            temp_ev <- lapply(all_sims[[i]]$ev, unlist)
+            temp_pv <- lapply(all_sims[[i]]$pv, unlist)
+
+            temp_comb_1_range <- data.frame()
+            for (u in 1:nrow(temp_fitted)) {
+                temp_qi <- data.frame(temp_ev[[u]], temp_pv[[u]])
+
+                if (ncol(temp_qi) == 2)
+                    names(temp_qi) <- c('expected_value', 'predicted_value')
+                else if (ncol(temp_qi) > 2 & is.factor(temp_pv[[u]]))
+                    names(temp_qi) <- c(sprintf('expected_%s', colnames(temp_ev[[u]])),
+                                        'predicted_value')
+
+                temp_df <- cbind(temp_fitted[u, ], temp_qi, row.names = NULL)
+                temp_comb_1_range <- rbind(temp_comb_1_range, temp_df)
+            }
+            temp_comb <- rbind(temp_comb, temp_comb_1_range)
+        }
+        else if (only_setx) {
+            temp_comb <- rbind(temp_comb, temp_fitted)
+        }
+    }
+    if (!only_setx) {
+        if (which_range == 'range') temp_comb$setx_value <- 'x'
+        else temp_comb$setx_value <- 'x1'
+        temp_comb <- temp_comb[, c(ncol(temp_comb), 1:(ncol(temp_comb)-1))]
+    }
+    return(temp_comb)
+}
+
+#' Return individual factor coefficient fitted values to single factor variable
+#'
+#' @param obj a zelig object with an estimated model
+#' @param fitted a data frame with values fitted by \code{setx}. Note
+#' created internally by \code{\link{extract_setx}} and
+#'   \code{\link{extract_setrange}}
+#'
+#' @author Christopher Gandrud
+#' @keywords internal
+
+factor_coef_combine <- function(obj, fitted) {
+
+    is_zelig(obj)
+
+    if (!('mcmc' %in% class(obj$zelig.out$z.out[[1]]))) { # find a more general solution
+        original_data <- obj$zelig.out$z.out[[1]]$model
+        factor_vars <- sapply(original_data, is.factor)
+        if (any(factor_vars)) {
+            for (i in names(original_data)[factor_vars]) {
+                if (!(i %in% names(fitted))) {
+                    matches_name <- names(fitted)[grepl(sprintf('^%s*', i),
+                                                        names(fitted))]
+                    var_levels <- levels(original_data[, i])
+                    fitted[, i] <- NA
+                    for (u in matches_name) {
+                        label_value <- gsub(sprintf('^%s', i), '', u)
+                        fitted[, i][fitted[, u] == 1] <- label_value
+                    }
+                    ref_level <- var_levels[!(var_levels %in%
+                                                  gsub(sprintf('^%s', i), '',
+                                                       matches_name))]
+                    fitted[, i][is.na(fitted[, i])] <- ref_level
+                    fitted[, i] <- factor(fitted[, i], levels = var_levels)
+                    fitted <- fitted[, !(names(fitted) %in% matches_name)]
+                }
+            }
+        }
+    }
+    return(fitted)
+}
+
+
+#' Find the median and a central interval of simulated quantity of interest
+#' distributions
+#'
+#' @param df a tidy-formatted data frame of simulated quantities of interest
+#'   created by \code{\link{zelig_qi_to_df}}.
+#' @param qi_type character string either `ev` or `pv` for returning the
+#'   central intervals for the expected value or predicted value, respectively.
+#' @param ci numeric. The central interval to return, expressed on the
+#' `(0, 100]` or the equivalent `(0, 1]` interval.
+#'
+#' @details A tidy-formatted data frame with the following columns:
+#'
+#'   - The values fitted with \code{\link{setx}}
+#'   - `qi_ci_min`: the minimum value of the central interval specified with
+#'   `ci`
+#'   - `qi_ci_median`: the median of the simulated quantity of interest
+#'   distribution
+#'   - `qi_ci_max`: the maximum value of the central interval specified with
+#'   `ci`
+#'
+#' @examples
+#' library(dplyr)
+#' qi.central.interval <- zelig(Petal.Width ~ Petal.Length + Species,
+#'              data = iris, model = "ls") %>%
+#'              setx(Petal.Length = 2:4, Species = "setosa") %>%
+#'              sim() %>%
+#'              zelig_qi_to_df() %>%
+#'              qi_slimmer()
+#'
+#' @importFrom dplyr bind_rows %>%
+#' @seealso \code{\link{zelig_qi_to_df}}
+#' @author Christopher Gandrud
+#' @md
+
+qi_slimmer <- function(df, qi_type = 'ev', ci = 0.95) {
+    qi__ <- scenario__ <- NULL
+
+    if (qi_type == 'ev') qi_type <- 'expected_value'
+    if (qi_type == 'pv') qi_type <- 'predicted_value'
+
+    if (!is.data.frame(df))
+        stop('df must be a data frame created by zelig_qi_to_df.',
+             call. = FALSE)
+
+    names_df <- names(df)
+    if (!any(c('expected_value', 'predicted_value') %in% names_df))
+        stop('The data frame does not appear to have been created by zelig_qi_to_df.',
+             call. = FALSE)
+
+    ci <- ci_check(ci)
+    lower <- (1 - ci)/2
+    upper <- 1 - lower
+
+    if (length(qi_type) != 1)
+        stop('Only one qi_type allowed per function call.', call. = FALSE)
+
+    qi_stripped <- gsub('_.*', '', qi_type)
+    if (!(qi_stripped %in% c('expected', 'predicted')))
+        stop('qi_type must be one of "ev", "pv", "expected_*" or "predicted_*". ',
+             call. = FALSE)
+
+    qi_df_location <- grep(qi_stripped, names_df)
+    qi_length <- length(qi_df_location)
+
+    if (qi_length > 1 & qi_type %in% c('ev', 'expected_value')) {
+        message(sprintf('\nMore than one %s values found. Returning slimmed expected values for the first outcome.\nIf another is desired please enter its name in qi_type.\n',
+                        qi_stripped))
+        qi_var <- names_df[qi_df_location[1]]
+    }
+    else qi_var <- qi_type
+
+    if (qi_stripped %in% 'expected'& length(qi_df_location) == 1)
+        qi_drop <- 'predicted'
+    else if ((qi_stripped %in% 'expected') & length(qi_df_location) > 1) {
+        other_expected <- names_df[qi_df_location]
+        other_expected <- other_expected[!(other_expected %in% qi_var)]
+        qi_drop <- c(other_expected, 'predicted_value')
+    }
+    else qi_drop <- 'expected'
+
+    if (qi_stripped %in% 'expected') qi_msg <- 'Expected Values'
+    else qi_msg <- 'Predicted Values'
+    message(sprintf('Slimming %s . . .', qi_msg))
+
+    # drop non-requested qi_type
+    if (length(qi_drop) == 1)
+        df <- df[, !(gsub('_.*', '', names_df) %in% qi_drop)]
+    else if (length(qi_drop) > 1)
+        df <- df[!(names_df %in% qi_drop)]
+
+    names(df)[names(df) == qi_var] <- 'qi__'
+    df$scenario__ <- interaction(df[, !(names(df) %in% 'qi__')], drop = TRUE)
+
+    qi_list <- split(df, df[['scenario__']])
+    qi_list <- lapply(seq_along(qi_list), function(x) {
+        if (!is.factor(qi_list[[x]][, 'qi__'])) {
+            lower_bound <- quantile(qi_list[[x]][, 'qi__'], prob = lower)
+            upper_bound <- quantile(qi_list[[x]][, 'qi__'], prob = upper)
+            subset(qi_list[[x]], qi__ >= lower_bound & qi__ <= upper_bound)
+        }
+        else if (is.factor(qi_list[[x]][, 'qi__'])) { # Categorical outcomes
+            prop_outcome <- as.data.frame.matrix(
+                                t(table(qi_list[[x]][, 'qi__']) /
+                                              nrow(qi_list[[x]])))
+            names(prop_outcome) <- sprintf('predicted_proportion_(Y=%s)',
+                                           1:ncol(prop_outcome))
+            cbind(qi_list[[x]][1, ], prop_outcome)
+        }
+    })
+    df_slimmed <- data.frame(bind_rows(qi_list))
+    names(df_slimmed) <- names(qi_list[[1]])
+
+    if (!is.factor(df_slimmed$qi__)) {
+        df_out <- df_slimmed %>% group_by(scenario__) %>%
+            summarise(qi_ci_min = min(qi__),
+                      qi_ci_median = median(qi__),
+                      qi_ci_max = max(qi__)
+            ) %>%
+            data.frame
+        scenarios_df <- df[!duplicated(df$scenario__), !(names(df) %in% 'qi__')] %>%
+            data.frame(row.names = NULL)
+        df_out <- merge(scenarios_df, df_out, by = 'scenario__', sort = FALSE)
+    }
+    else df_out <- df_slimmed
+
+    df_out$scenario__ <- NULL
+    df_out$qi__ <- NULL
+
+    return(df_out)
+}
+
+#' Convert \code{ci} interval from percent to proportion and check if valid
+#' @param x numeric. The central interval to return, expressed on the `(0, 100]`
+#' or the equivalent `(0, 1]` interval.
+#'
+#' @md
+#' @keywords internal
+
+ci_check <- function(x) {
+    if (x > 1 & x <= 100) x <- x / 100
+    if (x <= 0 | x > 1) {
+        stop(sprintf("%s will not produce a valid central interval.", x),
+              call. = FALSE)
+    }
+    return(x)
+}
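+
+# Illustrative sketch (not run) of the ci_check() conversion defined above:
+# values on the (0, 100] percent scale are rescaled to proportions, values
+# already on (0, 1] pass through unchanged, and anything else errors.
+#   ci_check(95)    # 0.95
+#   ci_check(0.9)   # 0.9
+#   ci_check(-1)    # error: will not produce a valid central interval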
diff --git a/R/model-ar.R b/R/model-ar.R
new file mode 100755
index 0000000..779acee
--- /dev/null
+++ b/R/model-ar.R
@@ -0,0 +1,90 @@
+#' Time-Series Model with Autoregressive Disturbance
+#'
+#' Warning: \code{summary} does not work with timeseries models after
+#' simulation.
+#'
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. For example, to run the same model on all fifty states, you could
+#'   use: \code{z.out <- zelig(y ~ x1 + x2, data = mydata, model = 'ls',
+#'   by = 'state')} You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#' @param ts The name of the variable containing the time indicator. This should be passed in as
+#'     a string. If this variable is not provided, Zelig will assume that the data is already
+#'     ordered by time.
+#' @param cs Name of a variable that denotes the cross-sectional element of the data, for example,
+#'  country name in a dataset with time-series across different countries. As a variable name,
+#'  this should be in quotes. If this is not provided, Zelig will assume that all observations
+#'  come from the same unit over time, and should be pooled, but if provided, individual models will
+#'  be run in each cross-section.
+#'  If \code{cs} is given as an argument, \code{ts} must also be provided. Additionally, \code{by}
+#'  must be \code{NULL}.
+#' @param order A vector of length 3 passed in as \code{c(p,d,q)} where p represents the order of the
+#'     autoregressive model, d represents the number of differences taken in the model, and q represents
+#'     the order of the moving average model.
+#' @details
+#' Currently only the Reference class syntax is available for time-series models. This
+#' model does not accept bootstraps or weights.
+#'
+#'
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#' @examples
+#' data(seatshare)
+#' subset <- seatshare[seatshare$country == "UNITED KINGDOM",]
+#' ts.out <- zelig(formula = unemp ~ leftseat, model = "ar", ts = "year", data = subset)
+#' summary(ts.out)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_ar.html}
+#'
+#' @import methods
+#' @export Zelig-ar
+#' @exportClass Zelig-ar
+#'
+#' @include model-zelig.R
+#' @include model-timeseries.R
+zar <- setRefClass("Zelig-ar",
+                       contains = "Zelig-timeseries")
+
+zar$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "ar"
+    .self$link <- "identity"
+    .self$fn <- quote(zeligArimaWrapper)
+    .self$description = "Time-Series Model with Autoregressive Disturbance"
+    .self$packageauthors <- "R Core Team"
+    .self$outcome <- "continuous"
+    .self$wrapper <- "timeseries"
+  }
+)
diff --git a/R/model-arima.R b/R/model-arima.R
new file mode 100755
index 0000000..a68c17c
--- /dev/null
+++ b/R/model-arima.R
@@ -0,0 +1,454 @@
+#' Autoregressive and Moving-Average Models with Integration for Time-Series Data
+#'
+#' Warning: \code{summary} does not work with timeseries models after
+#' simulation.
+#'
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. For example, to run the same model on all fifty states, you could
+#'   use: \code{z.out <- zelig(y ~ x1 + x2, data = mydata, model = 'ls',
+#'   by = 'state')} You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#' @param ts The name of the variable containing the time indicator. This should be passed in as
+#'     a string. If this variable is not provided, Zelig will assume that the data is already
+#'     ordered by time.
+#' @param cs Name of a variable that denotes the cross-sectional element of the data, for example,
+#'  country name in a dataset with time-series across different countries. As a variable name,
+#'  this should be in quotes. If this is not provided, Zelig will assume that all observations
+#'  come from the same unit over time, and should be pooled, but if provided, individual models will
+#'  be run in each cross-section.
+#'  If \code{cs} is given as an argument, \code{ts} must also be provided. Additionally, \code{by}
+#'  must be \code{NULL}.
+#' @param order A vector of length 3 passed in as \code{c(p,d,q)} where p represents the order of the
+#'     autoregressive model, d represents the number of differences taken in the model, and q represents
+#'     the order of the moving average model.
+#' @details
+#' Currently only the Reference class syntax is available for time-series models. This
+#' model does not accept bootstraps or weights.
+#'
+#'
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#' @examples
+#' data(seatshare)
+#' subset <- seatshare[seatshare$country == "UNITED KINGDOM",]
+#' ts.out <- zarima$new()
+#' ts.out$zelig(unemp ~ leftseat, order = c(1, 0, 1), data = subset)
+#'
+#' # Set fitted values and simulate quantities of interest
+#' ts.out$setx(leftseat = 0.75)
+#' ts.out$setx1(leftseat = 0.25)
+#' ts.out$sim()
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_arima.html}
+#' @import methods
+#' @export Zelig-arima
+#' @exportClass Zelig-arima
+#'
+#' @include model-zelig.R
+#' @include model-timeseries.R
+
+zarima <- setRefClass("Zelig-arima",
+                      contains = "Zelig-timeseries")
+
+zarima$methods(
+    initialize = function() {
+        callSuper()
+        .self$name <- "arima"
+        .self$link <- "identity"
+        #.self$family <- "gaussian"
+        .self$fn <- quote(zeligArimaWrapper)
+        #.self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+        .self$description <- "Autoregressive Moving-Average Models for Time-Series Data"
+        # JSON
+        .self$outcome <- "continuous"
+        .self$wrapper <- "timeseries"
+    }
+)
+
+zarima$methods(
+  qi = function(simparam, mm, mm1=NULL){
+
+    myorder <- eval(.self$zelig.call$order)
+    mycoef <- coef(.self$zelig.out$z.out[[1]])
+    sd <- sqrt(.self$zelig.out$z.out[[1]]$sigma2)
+
+    ## Check mm and mm1.  Particularly for issues surrounding intercept.
+    rebuildMM <- function(simparam, x){
+      xnames <- colnames(x)
+      snames <- colnames(simparam)
+      ## parameter "intercept" can be spelt "(Intercept)"" in model matrix
+      if("(Intercept)" %in% xnames){
+        flag <- xnames == "(Intercept)"
+        xnames[flag] <- "intercept"
+        colnames(x)[flag]<- "intercept" # this is equivalent to: colnames(x) <- xnames
+      }
+      ## "intercept" can be included in model matrix when not an estimated parameter (for example in models with integration)
+      xnamesflag <- xnames %in% snames
+      x <- x[, xnamesflag, drop=FALSE]
+      return(x)
+    }
+
+    mm <- rebuildMM(simparam, mm)
+    if(!is.null(mm1)){
+      mm1 <- rebuildMM(simparam, mm1)
+    }
+
+
+    ## Make ACF
+    acf <- simacf(coef=mycoef, order=myorder, params=simparam, alpha=0.05)
+    acf.length <- length(acf$expected.acf)
+    t1 <- 2*acf.length
+    t2 <- 2*acf.length
+
+
+    if((.self$bsetx1)||(.self$bsetx && !.self$bsetx1)){             # could also check if mm1 is NULL
+      # zeligARMAbreakforecaster() calls zeligARMAlongrun() internally
+      #  return(y.shock = yseries, y.innovation = y.innov, ev.shock = evseries, ev.innovation = ev.innov)
+      yseries <- zeligARMAbreakforecaster(y.init=NULL, x=mm, x1=mm1, simparam=simparam, order=myorder, sd=sd, t1=t1, t2=t2)
+      # maybe check nrow(yseries)=t1 + t2 ?
+
+      pv <- yseries$y.innovation[t1,]                # could use either $innovation or $shock here
+      pv.shortrun <- yseries$y.innovation[t1+1,]     # could use either $innovation or $shock here
+      pv.longrun <- yseries$y.innovation[t1+t2,]     # must use $innovation here
+
+      # Remember, these are expectations using the same simparam in each expectation.
+      ev <- yseries$ev.innovation[t1,]
+      ev.shortrun <- yseries$ev.innovation[t1+1,]
+      ev.longrun <- yseries$ev.innovation[t1+t2,]
+
+      return(list(acf = acf, ev = ev, pv = pv, pv.shortrun=pv.shortrun, pv.longrun=pv.longrun, ev.shortrun=ev.shortrun, ev.longrun=ev.longrun,
+                pvseries.shock=yseries$y.shock, pvseries.innovation=yseries$y.innovation,
+                evseries.shock=yseries$ev.shock, evseries.innovation=yseries$ev.innovation))
+
+    }else{
+      # just call zeligARMAlongrun()
+      yseries <- zeligARMAlongrun(y.init=NULL, x=mm, simparam=simparam, order=myorder, sd=sd)
+      pv <- yseries$y[1,]   # zeligARMAlongrun returns the series in reverse order to zeligARMAbreakforecaster
+      # Remember, these are expectations using the same simparam in each expectation:
+      ev <- yseries$ev[1,]
+      return(list(acf = acf, ev = ev, pv = pv))
+    }
+  }
+)
+
+zarima$methods(
+    mcfun = function(x, b0=0, b1=1, ..., sim=TRUE){
+        mu <- exp(b0 + b1 * x)
+        if(sim){
+            y <- rnorm(n=length(x), mean=mu)
+            return(y)
+        }else{
+            return(mu)
+        }
+    }
+)
+
+#' Estimation wrapper function for arima models, to easily fit with Zelig architecture
+#' @keywords internal
+
+zeligArimaWrapper <- function(formula, order = c(1, 0, 0), ... ,
+                              include.mean = TRUE, data){
+
+    # Using with():
+    # myArimaCall <- quote( arima(x=, order =, xreg= ) )
+    # output <- with(data, myArimaCall )
+
+
+    # Using arima() directly:
+    mf <- model.frame(formula, data)
+
+    acf3 <- as.character(formula[[3]])
+
+    yflag <- names(mf) %in% all.vars(formula[-3])
+    xflag <- names(mf) %in% all.vars(formula[-2])
+    myx <- as.matrix(mf[,yflag, drop = FALSE])  # could use get_all_vars()
+    is_varying(as.vector(myx), msg = 'Dependent variable does not vary for at least one of the cases.')
+    myxreg <- as.matrix(mf[,xflag, drop = FALSE])
+
+    if (("1" %in% acf3 ) & ("-" %in% acf3 )){
+        include.mean <- FALSE
+    }
+
+    output <- stats::arima(x = myx, order = order, xreg = myxreg,
+                           include.mean = include.mean, ...)
+
+}
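+
+# Rough usage sketch (not run) for the internal wrapper above, using the
+# seatshare data shipped with Zelig; regressors on the right-hand side of the
+# formula are passed to stats::arima() through xreg:
+#   data(seatshare)
+#   uk <- seatshare[seatshare$country == "UNITED KINGDOM", ]
+#   fit <- zeligArimaWrapper(unemp ~ leftseat, order = c(1, 0, 1), data = uk)
+#   coef(fit)   # ar1, ma1, intercept, and the leftseat coefficient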
+
+
+#' Construct Autocorrelation Function from Zelig object and simulated parameters
+#' @keywords internal
+
+simacf <- function(coef, order, params, alpha = 0.5){
+
+    #order <- eval(.self$zelig.call$order)
+    myar <- myma <- myar.seq <- myma.seq <- NULL
+
+    if(order[1]>0){
+        arnames <- paste("ar", 1:order[1], sep="")
+        myar <- coef[arnames]
+        myar.seq <- params[, arnames, drop=FALSE]
+    }
+
+    if(order[3]>0){
+        manames <- paste("ma", 1:order[3], sep="")
+        myma <- coef[manames]
+        myma.seq <- params[, manames, drop=FALSE]
+    }
+
+    mylag.max<-10  # Need to set automatically.
+
+    n.sims<-nrow(params)
+    expected.acf <- ARMAacf(ar=myar, ma=myma, lag.max=mylag.max)
+    acf.history<-matrix(NA, nrow=n.sims, ncol=length(expected.acf))      # length(expected.acf) = mylag.max +1
+    for(i in 1:n.sims){
+        acf.history[i,] <- ARMAacf(ar=myar.seq[i,], ma=myma.seq[i,], lag.max=mylag.max)
+    }
+
+
+    # Define functions to compute confidence intervals for each column in a matrix
+    ci.matrix <- function(x, alpha) {
+        pos.hi <- max(round((1-(alpha/2))*nrow(x)), 1)
+        pos.low <-max(round((alpha/2)*nrow(x)), 1)
+
+        ci.lower <- ci.upper <- rep(NA, ncol(x))
+        for(i in 1:ncol(x)){
+            temp<-sort(x[,i])
+            ci.lower[i]<-temp[pos.low]
+            ci.upper[i]<-temp[pos.hi]
+        }
+        return(list(ci.lower=ci.lower, ci.upper=ci.upper))
+    }
+    ci.acf <- ci.matrix(x=acf.history, alpha=0.05)
+
+    return(list(expected.acf=expected.acf, ci.acf=ci.acf, sims.acf=acf.history))
+}
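+
+# Sketch (not run) of the inputs simacf() expects: `coef` is the named
+# coefficient vector from the arima fit (e.g. ar1, ma1, intercept), `order` is
+# the c(p, d, q) vector from the zelig() call, and `params` (here called
+# simparam, an assumed object) is an s x k matrix of simulated parameters whose
+# columns match those names.
+#   out <- simacf(coef = coef(fit), order = c(1, 0, 1), params = simparam)
+#   out$expected.acf   # ACF implied by the point estimates, lags 0 to 10
+#   out$ci.acf         # simulation-based lower/upper bands at each lag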
+
+
+#' Construct Simulated Next Step in Dynamic Series
+#' @keywords internal
+
+zeligARMAnextstep <- function(yseries=NULL, xseries, wseries=NULL, beta, ar=NULL, i=NULL, ma=NULL, sd){
+
+    ## Check inputs
+    # t is obs across time
+    # s is sims
+    # k is covariates
+    # order is (p,q,r)
+    # assume yseries (t x sims), xseries (t x k), wseries (t x s), beta (s x k), ar (s x p), ma (s x r) are matrix
+    # assume sd is scalar
+
+    ## Could construct these by using known order more deliberatively
+
+    if(is.vector(yseries)){
+        #print("warning: yseries is vector")
+        yseries <- matrix(yseries, nrow=1)        # Assume if y is a vector, that we are only running one simulation chain of y, so y is (t x 1)
+    }
+    if(is.vector(xseries)){
+        #print("warning: xseries is vector")
+        xseries <- matrix(xseries, nrow=1)        # Assume if x is a vector, that there are no lagged terms, so x is (1 x k)
+    }
+    if(is.vector(wseries)){
+        #print("warning: wseries is vector")
+        wseries <- matrix(wseries, nrow=1)        # Assume if w is a vector, that we are only running one simulation chain of y, so w is (t x 1)
+    }
+    if(is.vector(beta)){
+        #print("warning: beta is vector")
+        beta <- matrix(beta, ncol=1)
+    }
+    if(is.vector(ar)){
+        #print("warning: ar is vector")
+        ar <- matrix(ar, ncol=1)
+    }
+    if(is.vector(ma)){
+        #print("warning: ma is vector")
+        ma <- matrix(ma, ncol=1)
+    }
+
+    ar.term <- function(yseries, ar, n){
+        yshort <- yseries[1:ncol(ar), , drop=FALSE]           # because we only need the diagonal of a square matrix, we can avoid full matrix multiplication
+        return( rowSums( ar * t(yshort) ) )       # diag[(s x p) . (p x s)] = diag[(s x s)] = (s x 1)
+    }
+    xt.term <- function(xseries, beta){
+        return( as.vector(beta %*% t(xseries)) )  # (s x k) . t(1 x k) = (s x 1)
+    }
+    ma.term <- function(wseries, ma){
+        wshort <- wseries[1:ncol(ma), , drop=FALSE]
+        return( rowSums( ma * t(wshort)) )        # diag[(s x r) . (r x s)] = diag[(s x s)] = (s x 1)
+    }
+
+    n.sims <- ncol(yseries)
+    w <- rnorm(n=n.sims, mean=0, sd=sd)
+    y <- xt.term(xseries,beta) + w              # conformable if xt is vector and w vector
+    if(!is.null(ar)){
+        y <- y + ar.term(yseries,ar)              # conformable if y vector and ar vector
+    }
+    if(!is.null(ma)){
+        y <- y + ma.term(wseries,ma)              # conformable if y vector and ma vector
+    }
+
+    exp.y <- y - w                              # one interpretation of an EV QI:  E(y| l(w), l(y))
+    return(list(y=y, w=w, exp.y=exp.y))
+}
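+
+# In equation form, a sketch of what the step above computes for simulation s,
+# using the most recent p rows of yseries and q rows of wseries:
+#   w_t[s]   ~ N(0, sd^2)
+#   y_t[s]   = beta[s, ] %*% x_t + sum_j ar[s, j] * y_{t-j}[s]
+#                                + sum_j ma[s, j] * w_{t-j}[s] + w_t[s]
+#   exp.y[s] = y_t[s] - w_t[s]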
+
+
+#' Calculate the Long-Run Equilibrium for Fixed X
+#' @keywords internal
+
+zeligARMAlongrun <- function(y.init=NULL, x, simparam, order, sd, tol=NULL, burnin=20){
+    if(is.null(tol)){
+        tol<-0.01
+    }
+    ar <- i <- ma <- NULL
+
+    ## Ensure parameter simulations in same order as model matrix
+    xnames <- colnames(x)
+    beta <- simparam[,xnames]
+
+    ## Extract AR and MA terms
+    if(order[1]>0){
+        arnames <- paste("ar", 1:order[1], sep="")
+        ar <- simparam[,arnames]
+    }
+    if(order[3]>0){
+        manames <- paste("ma", 1:order[3], sep="")
+        ma <- simparam[,manames]
+    }
+    timepast <- max(order[1],order[3])
+
+    n.sims <- nrow(simparam)
+
+    if(is.vector(x)){
+        x<-matrix(x,nrow=1, ncol=length(x))
+    }
+
+    if(is.null(y.init)){
+        if (!is.matrix(beta)) beta <- matrix(beta, ncol = 1)
+        betabar <- t(apply(beta, 2, mean))
+        y.init <- x %*% t(beta)
+    }
+
+    yseries <- matrix(y.init, nrow=timepast, ncol=n.sims, byrow=TRUE)
+    wseries <- matrix(rnorm(n=timepast*n.sims), nrow=timepast, ncol=n.sims)
+    evseries <- matrix(NA, nrow=timepast, ncol=n.sims)
+
+    finished <- FALSE
+    count <- 0
+    while(!finished){
+        y <- zeligARMAnextstep(yseries=yseries[1:timepast, ], xseries=x,
+                               wseries=wseries[1:timepast, ], beta = beta,
+                               ar = ar, i = i, ma = ma, sd = sd)
+        yseries <- rbind(y$y, yseries)
+        wseries <- rbind(y$w, wseries)
+        evseries<- rbind(y$exp.y, evseries)
+
+        #diff <- mean(abs(y.1 - y.0))  # Eventually need to determine some automated stopping rule
+        count <- count+1
+        finished <- count > burnin #| (diff < tol)
+    }
+
+    return(list(y.longrun=yseries, w.longrun=wseries, ev.longrun=evseries))
+}
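+
+# Note on the returned series (descriptive sketch): rows of y.longrun,
+# w.longrun, and ev.longrun are stored newest-first, so row 1 holds the most
+# recent simulated step for every simulation after the burn-in loop; this is
+# why zarima$qi() above reads row 1 of the long-run series.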
+
+
+#' Construct Simulated Series with Internal Discontinuity in X
+#' @keywords internal
+
+zeligARMAbreakforecaster <- function(y.init=NULL, x, x1, simparam, order, sd, t1=5, t2=10){
+
+    longrun.out <- zeligARMAlongrun(y.init=y.init, x=x, simparam=simparam, order=order, sd=sd)
+    yseries  <- longrun.out$y.longrun
+    wseries  <- longrun.out$w.longrun
+    evseries <- longrun.out$ev.longrun
+
+    ## Ensure parameter simulations in same order as model matrix
+    xnames <- colnames(x)
+    beta <- simparam[,xnames]
+
+    ## Extract AR and MA terms
+    ar <- i <- ma <- NULL
+    if(order[1]>0){
+        arnames <- paste("ar", 1:order[1], sep="")
+        ar <- simparam[,arnames]
+    }
+    if(order[3]>0){
+        manames <- paste("ma", 1:order[3], sep="")
+        ma <- simparam[,manames]
+    }
+    timepast <- max(order[1],order[3]) # How many steps backward are needed in the series  --  could we be more precise?
+
+    # Take a step at covariates x
+    for(i in 2:t1){
+        nextstep <- zeligARMAnextstep(yseries=yseries[1:timepast, ], xseries=x, wseries=wseries[1:timepast, ], beta=beta, ar=ar, i=i, ma=ma, sd=sd)
+        yseries  <- rbind(nextstep$y, yseries)   # Could just change arguments so nextstep(nextstep) doesn't need to copy elsewhere.
+        wseries  <- rbind(nextstep$w, wseries)
+        evseries <- rbind(nextstep$exp.y, evseries)
+    }
+
+    # Introduce shock
+    nextstep <- zeligARMAnextstep(yseries=yseries[1:timepast, ], xseries=x1, wseries=wseries[1:timepast, ], beta=beta, ar=ar, i=i, ma=ma, sd=sd)
+    yseries  <- rbind(nextstep$y, yseries)   # Could just change arguments so nextstep(nextstep) doesn't need to copy elsewhere.
+    wseries  <- rbind(nextstep$w, wseries)
+    evseries <- rbind(nextstep$exp.y, evseries)
+
+    y.innov  <- yseries
+    w.innov  <- wseries  # Note: the sequences of stochastic terms diverge from this point on
+    ev.innov <- evseries
+
+    for(i in 2:t2){
+        # Take further steps at covariates x1 (an introduction of an innovation)
+        nextstep <- zeligARMAnextstep(yseries=y.innov[1:timepast, ], xseries=x1, wseries=w.innov[1:timepast, ], beta=beta, ar=ar, i=i, ma=ma, sd=sd)
+        y.innov  <- rbind(nextstep$y, y.innov)  # Could just change arguments so nextstep(nextstep) doesn't need to copy elsewhere.
+        w.innov  <- rbind(nextstep$w, w.innov)
+        ev.innov <- rbind(nextstep$exp.y, ev.innov)
+
+        # And take steps returning to old covariates (an introduction of a shock)
+        nextstep <- zeligARMAnextstep(yseries=yseries[1:timepast, ], xseries=x, wseries=wseries[1:timepast, ], beta=beta, ar=ar, i=i, ma=ma, sd=sd)
+        yseries  <- rbind(nextstep$y, yseries)   # Could just change arguments so nextstep(nextstep) doesn't need to copy elsewhere.
+        wseries  <- rbind(nextstep$w, wseries)
+        evseries <- rbind(nextstep$exp.y, evseries)
+
+    }
+
+    yseries <- yseries[1:(t1 + t2), ]  # Truncate series to last periods, removing burn-in to equilibrium
+    y.innov <- y.innov[1:(t1 + t2), ]
+    evseries <- evseries[1:(t1 + t2), ]
+    ev.innov <- ev.innov[1:(t1 + t2), ]
+
+    yseries <- yseries[nrow(yseries):1,]  # Change y to conventional row ordering by time before returning
+    y.innov <- y.innov[nrow(y.innov):1,]
+    evseries <- evseries[nrow(evseries):1, ]
+    ev.innov <- ev.innov[nrow(ev.innov):1, ]
+
+    return(list(y.shock = yseries, y.innovation = y.innov, ev.shock = evseries, ev.innovation = ev.innov))
+}
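+
+# Descriptive sketch of the return value: y.shock, y.innovation, ev.shock, and
+# ev.innovation are (t1 + t2) x n.sims matrices in conventional time order.
+# Both paths share the first t1 periods under covariates x; at period t1 + 1
+# the covariates switch to x1, after which the "innovation" path keeps x1
+# while the "shock" path reverts to x.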
diff --git a/R/model-bayes.R b/R/model-bayes.R
new file mode 100644
index 0000000..068e863
--- /dev/null
+++ b/R/model-bayes.R
@@ -0,0 +1,139 @@
+#' Bayes Model object for inheritance across models in Zelig
+#'
+#' @import methods
+#' @export Zelig-bayes
+#' @exportClass Zelig-bayes
+#'
+#' @include model-zelig.R
+zbayes <- setRefClass("Zelig-bayes",
+                      contains = "Zelig")
+
+zbayes$methods(
+  initialize = function() {
+    callSuper()
+    .self$packageauthors <- "Andrew D. Martin, Kevin M. Quinn, and Jong Hee Park"
+    .self$modelauthors <- "Ben Goodrich, and Ying Lu"
+  }
+)
+
+zbayes$methods(
+  zelig = function(formula, 
+                   burnin = 1000, mcmc = 10000, 
+                   verbose = 0, 
+                   ..., 
+                   data,
+                   by = NULL,
+                   bootstrap = FALSE) {
+    if(!identical(bootstrap,FALSE)){
+      stop("Error: The bootstrap is not available for Markov chain Monte Carlo (MCMC) models.")
+    }
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- .self$zelig.call
+    if (missing(verbose))
+      verbose <- round((mcmc + burnin) / 10)
+#     .self$model.call$family <- call(.self$family, .self$link)
+    .self$model.call$verbose <- verbose
+    .self$num <- mcmc # CC: check
+    callSuper(formula = formula, data = data, ..., by = by, bootstrap = FALSE)
+  }
+)
+
+zbayes$methods(
+  param = function(z.out) {
+    return(z.out)
+  }
+)
+
+zbayes$methods(
+  get_coef = function() {
+    "Get estimated model coefficients"
+    return(.self$zelig.out$z.out[[1]])
+  } 
+)
+
+zbayes$methods(
+  geweke.diag = function() {
+    diag <- lapply(.self$zelig.out$z.out, coda::geweke.diag)
+    # Collapse if only one list element for prettier printing
+    if(length(diag)==1){
+        diag<-diag[[1]]
+    }
+
+
+    if(!citation("coda") %in% .self$refs){
+      .self$refs<-c(.self$refs,citation("coda"))
+    }
+    ref1<-bibentry(
+            bibtype="InCollection",
+            title = "Evaluating the accuracy of sampling-based approaches to calculating posterior moments.",
+            booktitle = "Bayesian Statistics 4",
+            author = person("John", "Geweke"),
+            year = 1992,
+            publisher = "Clarendon Press",
+            address = "Oxford, UK",
+            editor = c(person("JM", "Bernado"), person("JO", "Berger"), person("AP", "Dawid"), person("AFM", "Smith")) 
+            )
+    .self$refs<-c(.self$refs,ref1)
+    return(diag)
+  } 
+)
+
+zbayes$methods(
+  heidel.diag = function() {
+    diag <- lapply(.self$zelig.out$z.out, coda::heidel.diag)
+    # Collapse if only one list element for prettier printing
+    if(length(diag)==1){
+        diag<-diag[[1]]
+    }
+
+
+    if(!citation("coda") %in% .self$refs){
+      .self$refs<-c(.self$refs,citation("coda"))
+    }
+    ref1<-bibentry(
+            bibtype="Article",
+            title = "Simulation run length control in the presence of an initial transient.",
+            author = c(person("P", "Heidelberger"), person("PD", "Welch")),
+            journal = "Operations Research",
+            volume = 31,
+            year = 1983,
+            pages = "1109--44")
+    .self$refs<-c(.self$refs,ref1)
+    return(diag)
+  } 
+)
+
+zbayes$methods(
+  raftery.diag = function() {
+    diag <- lapply(.self$zelig.out$z.out, coda::raftery.diag)
+    # Collapse if only one list element for prettier printing
+    if(length(diag)==1){
+        diag<-diag[[1]]
+    }
+
+
+    if(!citation("coda") %in% .self$refs){
+      .self$refs<-c(.self$refs,citation("coda"))
+    }
+    ref1<-bibentry(
+            bibtype="Article",
+            title = "One long run with diagnostics: Implementation strategies for Markov chain Monte Carlo.",
+            author = c(person("Adrian E", "Raftery"), person("Steven M", "Lewis")),
+            journal = "Statistical Science",
+            volume = 31,
+            year = 1992,
+            pages = "1109--44")
+    ref2<-bibentry(
+            bibtype="InCollection",
+            title = "The number of iterations, convergence diagnostics and generic Metropolis algorithms.",
+            booktitle = "Practical Markov Chain Monte Carlo",
+            author = c(person("Adrian E", "Raftery"), person("Steven M", "Lewis")),
+            year = 1995,
+            publisher = "Chapman and Hall",
+            address = "London, UK",
+            editor = c(person("WR", "Gilks"), person("DJ", "Spiegelhalter"), person("S", "Richardson")) 
+            )
+    .self$refs<-c(.self$refs,ref1,ref2)
+    return(diag)
+  } 
+)
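+
+# Usage sketch (not run, with an assumed data frame `mydata`) for the
+# convergence diagnostics defined above, via any model inheriting from
+# Zelig-bayes:
+#   z.out <- zelig(y ~ x1 + x2, model = "logit.bayes", data = mydata)
+#   z.out$geweke.diag()    # Geweke (1992) z-scores for each parameter
+#   z.out$heidel.diag()    # Heidelberger-Welch (1983) stationarity tests
+#   z.out$raftery.diag()   # Raftery-Lewis run-length diagnostics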
diff --git a/R/model-bbinchoice.R b/R/model-bbinchoice.R
deleted file mode 100644
index 7ac47bc..0000000
--- a/R/model-bbinchoice.R
+++ /dev/null
@@ -1,155 +0,0 @@
-#' Bivariate Binary Choice object for inheritance across models in ZeligChoice
-#'
-#' @import methods
-#' @export Zelig-bbinchoice
-#' @exportClass Zelig-bbinchoice
-
-
-
-zbbinchoice <- setRefClass("Zelig-bbinchoice",
-                          contains = "Zelig",
-                          field = list(family = "ANY",
-                                       linkinv = "function"
-                          ))
-
-zbbinchoice$methods(
-  initialize = function() {
-    callSuper()
-    .self$fn <- quote(VGAM::vglm)
-    .self$authors <- "Kosuke Imai, Gary King, Olivia Lau"
-    .self$packageauthors <- "Thomas W. Yee"
-    .self$year <- 2007
-    .self$category <- "dichotomous"
-  }
-)
-
-zbbinchoice$methods(
-  zelig = function(formula, data, ..., weights = NULL, by = NULL, bootstrap = FALSE) {
-    .self$zelig.call <- match.call(expand.dots = TRUE)
-    .self$model.call <- match.call(expand.dots = TRUE)
-    .self$model.call$family <- .self$family
-    if (!is.null(weights)) 
-        message('Note: Zelig weight results may differ from those in VGAM::vglm.')
-    callSuper(formula = formula, data = data, ..., weights = weights, by = by, 
-              bootstrap = bootstrap)
-  }
-)
-
-zbbinchoice$methods(
-  param = function(z.out, method="mvn") {
-    if(identical(method,"mvn")){
-      return(mvrnorm(.self$num, coef(z.out), vcov(z.out))) 
-    } else if(identical(method,"point")){
-      return(t(as.matrix(coef(z.out))))
-    } else {
-      stop("param called with method argument of undefined type.")
-    }
-  }
-)
-
-zbbinchoice$methods(
-  # From Zelig 4
-  qi = function(simparam, mm) {
-    .pp <- function(object, constr, all.coef, x) {
-      xm <- list()
-      xm <- rep(list(NULL), 3)
-      sim.eta <- NULL
-      for (i in 1:length(constr))
-        for (j in 1:3)
-          if (sum(constr[[i]][j,]) == 1)
-            xm[[j]] <- c(xm[[j]], x[,names(constr)[i]])
-      sim.eta <- cbind(
-        all.coef[[1]] %*% as.matrix( xm[[1]] ),
-        all.coef[[2]] %*% as.matrix( xm[[2]] ),
-        all.coef[[3]] %*% as.matrix( xm[[3]] )
-      )
-      # compute inverse (theta)
-      ev <- .self$linkinv(sim.eta)
-      # assign correct column names
-      colnames(ev) <- c("Pr(Y1=0, Y2=0)",
-                        "Pr(Y1=0, Y2=1)",
-                        "Pr(Y1=1, Y2=0)",
-                        "Pr(Y1=1, Y2=1)"
-      )
-      return(ev)
-    }
-    
-    .pr <- function(ev) {
-      mpr <- cbind(ev[, 3] + ev[, 4], ev[, 2] + ev[, 4])
-      index <- matrix(NA, ncol=2, nrow=nrow(mpr))
-      index[, 1] <- rbinom(n=nrow(ev), size=1, prob=mpr[, 1])
-      index[, 2] <- rbinom(n=nrow(ev), size=1, prob=mpr[, 2])
-      pr <- matrix(NA, nrow=nrow(ev), ncol=4)
-      pr[, 1] <- as.integer(index[, 1] == 0 & index[, 2] == 0)
-      pr[, 2] <- as.integer(index[, 1] == 0 & index[, 2] == 1)
-      pr[, 3] <- as.integer(index[, 1] == 1 & index[, 2] == 0)
-      pr[, 4] <- as.integer(index[, 1] == 1 & index[, 2] == 1)
-      colnames(pr) <- c("(Y1=0, Y2=0)",
-                        "(Y1=0, Y2=1)",
-                        "(Y1=1, Y2=0)",
-                        "(Y1=1, Y2=1)")
-      return(pr)
-    }
-    .make.match.table <- function(index, cols=NULL) {
-      pr <- matrix(0, nrow=nrow(index), ncol=4)
-      # assigns values by the rule:
-      #   pr[j,1] = 1 iff index[j,1] == 0 && index[j,2] == 0
-      #   pr[j,2] = 1 iff index[j,1] == 0 && index[j,2] == 1
-      #   pr[j,3] = 1 iff index[j,1] == 1 && index[j,2] == 0
-      #   pr[j,4] = 1 iff index[j,1] == 1 && index[j,2] == 1
-      # NOTE: only one column can be true at a time, so as a result
-      #       we can do a much more elegant one liner, that I'll code
-      #       later.  In this current form, I don't think this actually
-      #       explains what is going on.
-      pr[, 1] <- as.integer(index[, 1] == 0 & index[, 2] == 0)
-      pr[, 2] <- as.integer(index[, 1] == 0 & index[, 2] == 1)
-      pr[, 3] <- as.integer(index[, 1] == 1 & index[, 2] == 0)
-      pr[, 4] <- as.integer(index[, 1] == 1 & index[, 2] == 1)
-      # assign column names
-      colnames(pr) <- if (is.character(cols) && length(cols)==4)
-        cols
-      else
-        c("(Y1=0, Y2=0)",
-          "(Y1=0, Y2=1)",
-          "(Y1=1, Y2=0)",
-          "(Y1=1, Y2=1)")
-      return(pr)
-    }
-    all.coef <- NULL
-    coefs <- simparam
-    cm <- constraints(.self$zelig.out$z.out[[1]])
-    v <- vector("list", 3)
-    for (i in 1:length(cm)) {
-      if (ncol(cm[[i]]) == 1){
-        for (j in 1:3)
-          if (sum(cm[[i]][j, ]) == 1)
-            v[[j]] <- c(v[[j]], names(cm)[i])
-      }
-      else {
-        for (j in 1:3)
-          if (sum(cm[[i]][j,]) == 1)
-            v[[j]] <- c(v[[j]], paste(names(cm)[i], ":", j, sep=""))
-      }
-    }
-    for(i in 1:3)
-      all.coef[[i]] <- coefs[ , unlist(v[i]) ]
-    col.names <- c("Pr(Y1=0, Y2=0)",
-                   "Pr(Y1=0, Y2=1)",
-                   "Pr(Y1=1, Y2=0)",
-                   "Pr(Y1=1, Y2=1)"
-    )
-    ev <- .pp(.self$zelig.out$z.out[[1]], cm, all.coef, as.matrix(mm))
-    pv <- .pr(ev)
-    levels(pv) <- c(0, 1)
-#     return(list("Predicted Probabilities: Pr(Y1=k|X)" = ev,
-#                 "Predicted Values: Y=k|X" = pv))
-    return(list(ev = ev, pv = pv))
-  }
-)
-
-# zbinchoice$methods(
-#   show = function() {
-#     lapply(.self$zelig.out, function(x) print(VGAM::summary(x)))
-#   }
-# )
-
diff --git a/R/model-binchoice-gee.R b/R/model-binchoice-gee.R
new file mode 100644
index 0000000..3f413f6
--- /dev/null
+++ b/R/model-binchoice-gee.R
@@ -0,0 +1,32 @@
+#' Object for Binary Choice outcomes in Generalized Estimating Equations 
+#' for inheritance across models in Zelig
+#'
+#' @import methods
+#' @export Zelig-binchoice-gee
+#' @exportClass Zelig-binchoice-gee
+#'
+#' @include model-zelig.R
+#' @include model-binchoice.R
+#' @include model-gee.R
+zbinchoicegee <- setRefClass("Zelig-binchoice-gee",
+                           contains = c("Zelig-gee",
+                                        "Zelig-binchoice"))
+
+zbinchoicegee$methods(
+  initialize = function() {
+    callSuper()
+    .self$family <- "binomial"
+    .self$year <- 2011
+    .self$category <- "continuous"
+    .self$authors <- "Patrick Lam"
+    .self$fn <- quote(geepack::geeglm)
+    # JSON from parent
+  }
+)
+
+zbinchoicegee$methods(
+  param = function(z.out, method="mvn") {
+    simparam.local <- callSuper(z.out, method=method)
+    return(simparam.local$simparam) # no ancillary parameter
+  }
+)
diff --git a/R/model-binchoice-survey.R b/R/model-binchoice-survey.R
new file mode 100644
index 0000000..bd9b6f1
--- /dev/null
+++ b/R/model-binchoice-survey.R
@@ -0,0 +1,23 @@
+#' Object for Binary Choice outcomes with Survey Weights
+#' for inheritance across models in Zelig
+#'
+#' @import methods
+#' @export Zelig-binchoice-survey
+#' @exportClass Zelig-binchoice-survey
+#'
+#' @include model-zelig.R
+#' @include model-binchoice.R
+#' @include model-survey.R
+zbinchoicesurvey <- setRefClass("Zelig-binchoice-survey",
+                           contains = c("Zelig-survey",
+                                        "Zelig-binchoice"))
+
+zbinchoicesurvey$methods(
+  initialize = function() {
+    callSuper()
+    .self$family <- "binomial"
+    .self$category <- "continuous"
+    # JSON from parent
+  }
+)
+
diff --git a/R/model-binchoice.R b/R/model-binchoice.R
new file mode 100755
index 0000000..d3ef68e
--- /dev/null
+++ b/R/model-binchoice.R
@@ -0,0 +1,38 @@
+#' Binary Choice object for inheritance across models in Zelig
+#'
+#' @import methods
+#' @export Zelig-binchoice
+#' @exportClass Zelig-binchoice
+#'
+#' @include model-zelig.R
+#' @include model-glm.R
+zbinchoice <- setRefClass("Zelig-binchoice",
+                          contains = "Zelig-glm")
+  
+zbinchoice$methods(
+  initialize = function() {
+    callSuper()
+    .self$authors <- "Kosuke Imai, Gary King, Olivia Lau"
+    .self$year <- 2007
+    .self$category <- "dichotomous"
+    .self$family <- "binomial"
+    # JSON
+    .self$outcome <- "binary"
+  }
+)
+
+zbinchoice$methods(
+  qi = function(simparam, mm) {
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    coeff <- simparam
+    eta <- simparam %*% t(mm)
+    eta <- Filter(function (y) !is.na(y), eta)
+    theta <- matrix(.self$linkinv(eta), nrow = nrow(coeff))
+    ev <- matrix(.self$linkinv(eta), ncol = ncol(theta))
+    pv <- matrix(nrow = nrow(ev), ncol = ncol(ev))
+    for (j in 1:ncol(ev))
+      pv[, j] <- rbinom(length(ev[, j]), 1, prob = ev[, j])
+    levels(pv) <- c(0, 1)
+    return(list(ev = ev, pv = pv))
+  }
+)
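+
+# Sketch of the quantities computed above for the logit case (assumed inputs:
+# simparam is an s x k matrix of simulated coefficients, mm a 1 x k model
+# matrix), where the binomial-logit inverse link is plogis():
+#   eta <- simparam %*% t(mm)          # s x 1 linear predictors
+#   ev  <- plogis(eta)                 # expected values, Pr(Y = 1 | X)
+#   pv  <- rbinom(length(ev), 1, ev)   # predicted values drawn as Bernoulli(ev)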
diff --git a/R/model-blogit.R b/R/model-blogit.R
deleted file mode 100644
index c3b2b56..0000000
--- a/R/model-blogit.R
+++ /dev/null
@@ -1,41 +0,0 @@
-#' Bivariate Logistic Regression for Two Dichotomous Dependent Variables
-#'
-#' Vignette: \url{http://docs.zeligproject.org/articles/zeligchoice_blogit.html}
-#' @import methods
-#' @export Zelig-blogit
-#' @exportClass Zelig-blogit
-#' 
-#' @include model-bbinchoice.R
-
-zblogit <- setRefClass("Zelig-blogit",
-                       contains = "Zelig-bbinchoice")
-
-zblogit$methods(
-  initialize = function() {
-    callSuper()
-    .self$name <- "blogit"
-    .self$description <- "Bivariate Logit Regression for Dichotomous Dependent Variables"
-    .self$family <- quote(binom2.or(zero = 3))
-    .self$linkinv <- binom2.or()@linkinv
-    .self$wrapper <- "blogit"
-    .self$vignette.url <- "http://docs.zeligproject.org/articles/zeligchoice_blogit.html"
-  }
-)
-
-zblogit$methods(
-  mcfun = function(x, b0=0, b1=1, b2=1, b3=0.5, ..., sim=TRUE){
-    n.sim = length(x)
-    pi1 <- 1/(1 + exp(b0 + b1 * x))
-    pi2 <- 1/(1 + exp(b2 + b3 * x))
-
-    if(sim){
-      y1 <- rbinom(n=n.sim, size=1, prob=pi1)
-      y2 <- rbinom(n=n.sim, size=1, prob=pi2)
-      return(as.data.frame(y1, y2, x))
-    }else{
-      y1.hat <- pi1
-      y2.hat <- pi2
-      return(as.data.frame(y1.hat, y2.hat, x))
-    }
-  }
-)
\ No newline at end of file
diff --git a/R/model-bprobit.R b/R/model-bprobit.R
deleted file mode 100644
index 165214c..0000000
--- a/R/model-bprobit.R
+++ /dev/null
@@ -1,41 +0,0 @@
-#' Bivariate Probit Regression for Two Dichotomous Dependent Variables
-#'
-#' Vignette: \url{http://docs.zeligproject.org/articles/zeligchoice_bprobit.html}
-#' @import methods
-#' @export Zelig-bprobit
-#' @exportClass Zelig-bprobit
-#' 
-#' @include model-bbinchoice.R
-
-zbprobit <- setRefClass("Zelig-bprobit",
-                        contains = "Zelig-bbinchoice")
-
-zbprobit$methods(
-  initialize = function() {
-    callSuper()
-    .self$name <- "bprobit"
-    .self$description <- "Bivariate Probit Regression for Dichotomous Dependent Variables"
-    .self$family <- quote(binom2.rho(zero = 3))
-    .self$linkinv <- binom2.rho()@linkinv
-    .self$wrapper <- "bprobit"
-    .self$vignette.url <- "http://docs.zeligproject.org/articles/zeligchoice_bprobit.html"
-  }
-)
-
-zbprobit$methods(
-  mcfun = function(x, b0=0, b1=1, b2=1, b3=0.5, ..., sim=TRUE){
-    n.sim = length(x)
-    pi1 <- pnorm(b0 + b1 * x)
-    pi2 <- pnorm(b2 + b3 * x)
-
-    if(sim){
-      y1 <- rbinom(n=n.sim, size=1, prob=pi1)
-      y2 <- rbinom(n=n.sim, size=1, prob=pi2)
-      return(as.data.frame(y1, y2, x))
-    }else{
-      y1.hat <- pi1
-      y2.hat <- pi2
-      return(as.data.frame(y1.hat, y2.hat, x))
-    }
-  }
-)
\ No newline at end of file
diff --git a/R/model-exp.R b/R/model-exp.R
new file mode 100755
index 0000000..5060e8c
--- /dev/null
+++ b/R/model-exp.R
@@ -0,0 +1,142 @@
+#' Exponential Regression for Duration Dependent Variables
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. For example, to run the same model on all fifty states, you could
+#'   use: \code{z.out <- zelig(y ~ x1 + x2, data = mydata, model = 'ls',
+#'   by = 'state')} You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#' @param robust defaults to FALSE. If TRUE, zelig() computes robust standard errors based on sandwich estimators and the options selected in cluster.
+#' @param cluster if robust = TRUE, you may select a variable to define groups of correlated observations. Let x3 be a variable that consists of either discrete numeric values, character strings, or factors that define strata. Then
+#' z.out <- zelig(y ~ x1 + x2, robust = TRUE, cluster = "x3", model = "exp", data = mydata)
+#' means that the observations can be correlated within the strata defined by the variable x3, and that robust standard errors should be calculated according to those clusters. If robust = TRUE but cluster is not specified, zelig() assumes that each observation falls into its own cluster.
+#'
+#' @examples
+#' library(Zelig)
+#' data(coalition)
+#' library(survival)
+#' z.out <- zelig(Surv(duration, ciep12) ~ fract + numst2, model = "exp",
+#'                data = coalition)
+#' summary(z.out)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_exp.html}
+#' @import methods
+#' @export Zelig-exp
+#' @exportClass Zelig-exp
+#'
+#' @include model-zelig.R
+
+zexp <- setRefClass("Zelig-exp",
+                        contains = "Zelig",
+                        fields = list(simalpha = "list",
+                                      linkinv = "function"))
+
+zexp$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "exp"
+    .self$authors <- "Olivia Lau, Kosuke Imai, Gary King"
+    .self$packageauthors <- "Terry M. Therneau, and Thomas Lumley"
+    .self$year <- 2011
+    .self$description <- "Exponential Regression for Duration Dependent Variables"
+    .self$fn <- quote(survival::survreg)
+    .self$linkinv <- survreg.distributions[["exponential"]]$itrans
+    # JSON
+    .self$outcome <- "continous"
+    .self$wrapper <- "exp"
+    .self$acceptweights <- TRUE
+  }
+)
+
+zexp$methods(
+  zelig = function(formula, ..., robust = FALSE, cluster = NULL, data,
+                   weights = NULL, by = NULL, bootstrap = FALSE) {
+
+    localFormula <- formula # avoids CRAN warning about deep assignment from formula existing separately as argument and field
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- .self$zelig.call
+    if (!(is.null(cluster) || robust))
+      stop("If cluster is specified, then `robust` must be TRUE")
+    # Add cluster term
+    if (robust || !is.null(cluster))
+      localFormula <- cluster.formula(localFormula, cluster)
+    .self$model.call$dist <- "exponential"
+    .self$model.call$model <- FALSE
+    callSuper(formula = localFormula, data = data, ..., robust = robust,
+              cluster = cluster,  weights = weights, by = by, bootstrap = bootstrap)
+    rse <- lapply(.self$zelig.out$z.out, (function(x) vcovHC(x, type = "HC0")))
+    .self$test.statistics <- list(robust.se = rse)
+  }
+)
+
+zexp$methods(
+  qi = function(simparam, mm) {
+    eta <- simparam %*% t(mm)
+    ev <- as.matrix(apply(eta, 2, linkinv))
+    pv <- as.matrix(rexp(length(ev), rate = 1 / ev))
+    return(list(ev = ev, pv = pv))
+  }
+)
+
+zexp$methods(
+  mcfun = function(x, b0=0, b1=1, alpha=1, sim=TRUE){
+    .self$mcformula <- as.Formula("Surv(y.sim, event) ~ x.sim")
+
+    lambda <-exp(b0 + b1 * x)
+    event <- rep(1, length(x))
+    y.sim <- rexp(n=length(x), rate=lambda)
+    y.hat <- 1/lambda
+
+    if(sim){
+        mydata <- data.frame(y.sim=y.sim, event=event, x.sim=x)
+        return(mydata)
+    }else{
+        mydata <- data.frame(y.hat=y.hat, event=event, x.seq=x)
+        return(mydata)
+    }
+  }
+)
diff --git a/R/model-factor-bayes.R b/R/model-factor-bayes.R
new file mode 100644
index 0000000..7432288
--- /dev/null
+++ b/R/model-factor-bayes.R
@@ -0,0 +1,269 @@
+#' Bayesian Factor Analysis
+#'
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{~ Y1 + Y2 + Y3}, where Y1, Y2, and Y3 are variables
+#'   of interest in factor analysis (manifest variables), assumed to be
+#'   normally distributed. The model requires a minimum of three manifest
+#'   variables contained in the
+#'   same dataset. The \code{+} symbol means ``inclusion'' not
+#'   ``addition.''
+#' @param factors number of the factors to be fitted (defaults to 2).
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' In addition, \code{zelig()} accepts the following arguments for model specification:
+#' \itemize{
+#'      \item \code{lambda.constraints}: list containing the equality or
+#'      inequality constraints on the factor loadings. Choose from one of the following forms:
+#'      \item \code{varname = list()}: by default, no constraints are imposed.
+#'      \item \code{varname = list(d, c)}: constrains the dth loading for the
+#'            variable named varname to be equal to c.
+#'      \item \code{varname = list(d, +)}: constrains the dth loading for the variable named varname to be positive;
+#'      \item \code{varname = list(d, -)}: constrains the dth loading for the variable named varname to be negative.
+#'      \item \code{std.var}: defaults to \code{FALSE} (manifest variables are rescaled to
+#'      zero mean, but retain observed variance). If \code{TRUE}, the manifest
+#'      variables are rescaled to be mean zero and unit variance.
+#' }
+#'
+#' In addition, \code{zelig()} accepts the following inputs for \code{factor.bayes}:
+#' \itemize{
+#'     \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+#'     \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 20,000).
+#'     \item \code{thin}: thinning interval for the Markov chain. Only every thin-th
+#'         draw from the Markov chain is kept. The value of mcmc must be divisible
+#'         by this value. The default value is 1.
+#'     \item \code{verbose}: defaults to FALSE. If TRUE, the
+#'     progress of the sampler (every 10\%) is printed to the screen.
+#'     \item \code{seed}: seed for the random number generator. The default is NA, which
+#'     corresponds to a random seed of 12345.
+#'     \item \code{Lambda.start}: starting values of the factor loading matrix \eqn{\Lambda}, either a
+#'     scalar (all unconstrained loadings are set to that value), or a matrix with
+#'     compatible dimensions. The default is NA, where the start values are set to
+#'     be 0 for unconstrained factor loadings, and 0.5 or -0.5 for constrained
+#'     factor loadings (depending on the nature of the constraints).
+#'     \item \code{Psi.start}: starting values for the uniquenesses, either a scalar
+#'     (the starting values for all diagonal elements of \eqn{\Psi} are set to be this value),
+#'     or a vector with length equal to the number of manifest variables. In the latter
+#'     case, the starting values of the diagonal elements of \eqn{\Psi} take the values of
+#'     Psi.start. The default value is NA, where the starting values of all the
+#'     uniquenesses are set to be 0.5.
+#'     \item \code{store.lambda}: defaults to TRUE, which stores the posterior draws of the factor loadings.
+#'     \item \code{store.scores}: defaults to FALSE. If TRUE, stores the posterior draws of the
+#'     factor scores. (Storing factor scores may take a large amount of memory for a large
+#'     number of draws or observations.)
+#' }
+#'
+#' The model also accepts the following additional arguments to specify prior parameters:
+#' \itemize{
+#'     \item \code{l0}: mean of the Normal prior for the factor loadings, either a scalar or a
+#'     matrix with the same dimensions as \eqn{\Lambda}. If a scalar value, that value will be the
+#'     prior mean for all the factor loadings. Defaults to 0.
+#'     \item \code{L0}: precision parameter of the Normal prior for the factor loadings, either
+#'     a scalar or a matrix with the same dimensions as \eqn{\Lambda}. If \code{L0} takes a scalar value,
+#'     then the precision matrix will be a diagonal matrix with the diagonal elements
+#'     set to that value. The default value is 0, which leads to an improper prior.
+#'     \item \code{a0}: the shape parameter of the Inverse Gamma prior for the uniquenesses
+#'     is \code{a0}/2. It can take a scalar value or a vector. The default value is 0.001.
+#'     \item \code{b0}: the scale parameter of the Inverse Gamma prior for the uniquenesses
+#'     is \code{b0}/2. It can take a scalar value or a vector. The default value is 0.001.
+#' }
+#'
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#' }
+#'
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#' @examples
+#' \dontrun{
+#' data(swiss)
+#' names(swiss) <- c("Fert", "Agr", "Exam", "Educ", "Cath", "InfMort")
+#' z.out <- zelig(~ Agr + Exam + Educ + Cath + InfMort,
+#' model = "factor.bayes", data = swiss,
+#' factors = 2, verbose = FALSE,
+#' a0 = 1, b0 = 0.15, burnin = 500, mcmc = 5000)
+#'
+#' z.out$geweke.diag()
+#' z.out <- zelig(~ Agr + Exam + Educ + Cath + InfMort,
+#' model = "factor.bayes", data = swiss, factors = 2,
+#' lambda.constraints =
+#'    list(Exam = list(1,"+"),
+#'         Exam = list(2,"-"),
+#'         Educ = c(2, 0),
+#'         InfMort = c(1, 0)),
+#' verbose = FALSE, a0 = 1, b0 = 0.15,
+#' burnin = 500, mcmc = 5000)
+#'
+#' z.out$geweke.diag()
+#' z.out$heidel.diag()
+#' z.out$raftery.diag()
+#' summary(z.out)
+#' }
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_factorbayes.html}
+#' @import methods
+#' @export Zelig-factor-bayes
+#' @exportClass Zelig-factor-bayes
+#'
+#' @include model-zelig.R
+
+zfactorbayes <- setRefClass("Zelig-factor-bayes",
+                            contains = c("Zelig"))
+
+zfactorbayes$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "factor-bayes"
+    .self$year <- 2013
+    .self$authors <- "Ben Goodrich, Ying Lu"
+    .self$packageauthors <- "Andrew D. Martin, Kevin M. Quinn, and Jong Hee Park"
+    .self$description = "Bayesian Factor Analysis"
+    .self$fn <- quote(MCMCpack::MCMCfactanal)
+    # JSON from parent
+    .self$wrapper <- "factor.bayes"
+  }
+)
+
+zfactorbayes$methods(
+  zelig = function(formula,
+                   factors = 2,
+                   burnin = 1000, mcmc = 20000,
+                   verbose = 0,
+                   ...,
+                   data,
+                   by = NULL,
+                   bootstrap = FALSE) {
+    if(!identical(bootstrap,FALSE)){
+      stop("Error: The bootstrap is not available for Markov chain Monte Carlo (MCMC) models.")
+    }
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- .self$zelig.call
+    if (missing(verbose))
+      verbose <- round((mcmc + burnin) / 10)
+    if (factors < 2)
+      stop("Number of factors needs to be at least 2")
+    .self$model.call$verbose <- verbose
+    .self$model.call$x <- formula
+    .self$model.call$factors <- factors
+    callSuper(formula = formula, data = data,..., by = by, bootstrap = FALSE)
+  }
+)
+
+zfactorbayes$methods(
+  qi = function() {
+    return(NULL)
+  }
+)
+
+# The following diagnostics are also in Zelig-bayes, which unfortunately Zelig-factor-bayes does not currently inherit.
+zfactorbayes$methods(
+  geweke.diag = function() {
+    diag <- lapply(.self$zelig.out$z.out, coda::geweke.diag)
+    # Collapse if only one list element for prettier printing
+    if(length(diag)==1){
+        diag<-diag[[1]]
+    }
+
+
+    if(!citation("coda") %in% .self$refs){
+      .self$refs<-c(.self$refs,citation("coda"))
+    }
+    ref1<-bibentry(
+            bibtype="InCollection",
+            title = "Evaluating the accuracy of sampling-based approaches to calculating posterior moments.",
+            booktitle = "Bayesian Statistics 4",
+            author = person("John", "Geweke"),
+            year = 1992,
+            publisher = "Clarendon Press",
+            address = "Oxford, UK",
+            editor = c(person("JM", "Bernado"), person("JO", "Berger"), person("AP", "Dawid"), person("AFM", "Smith"))
+            )
+    .self$refs<-c(.self$refs,ref1)
+    return(diag)
+  }
+)
+
+zfactorbayes$methods(
+  heidel.diag = function() {
+    diag <- lapply(.self$zelig.out$z.out, coda::heidel.diag)
+    # Collapse if only one list element for prettier printing
+    if(length(diag)==1){
+        diag<-diag[[1]]
+    }
+
+
+    if(!citation("coda") %in% .self$refs){
+      .self$refs<-c(.self$refs,citation("coda"))
+    }
+    ref1<-bibentry(
+            bibtype="Article",
+            title = "Simulation run length control in the presence of an initial transient.",
+            author = c(person("P", "Heidelberger"), person("PD", "Welch")),
+            journal = "Operations Research",
+            volume = 31,
+            year = 1983,
+            pages = "1109--44")
+    .self$refs<-c(.self$refs,ref1)
+    return(diag)
+  }
+)
+
+zfactorbayes$methods(
+  raftery.diag = function() {
+    diag <- lapply(.self$zelig.out$z.out, coda::raftery.diag)
+    # Collapse if only one list element for prettier printing
+    if(length(diag)==1){
+        diag<-diag[[1]]
+    }
+
+
+    if(!citation("coda") %in% .self$refs){
+      .self$refs<-c(.self$refs,citation("coda"))
+    }
+    ref1<-bibentry(
+            bibtype="Article",
+            title = "One long run with diagnostics: Implementation strategies for Markov chain Monte Carlo.",
+            author = c(person("Adrian E", "Raftery"), person("Steven M", "Lewis")),
+            journal = "Statistical Science",
+            volume = 7,
+            year = 1992,
+            pages = "493--97")
+    ref2<-bibentry(
+            bibtype="InCollection",
+            title = "The number of iterations, convergence diagnostics and generic Metropolis algorithms.",
+            booktitle = "Practical Markov Chain Monte Carlo",
+            author = c(person("Adrian E", "Raftery"), person("Steven M", "Lewis")),
+            year = 1995,
+            publisher = "Chapman and Hall",
+            address = "London, UK",
+            editor = c(person("WR", "Gilks"), person("DJ", "Spiegelhalter"), person("S", "Richardson"))
+            )
+    .self$refs<-c(.self$refs,ref1,ref2)
+    return(diag)
+  }
+)
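# A minimal usage sketch (assumptions: `z.out` is the fitted "factor.bayes" object from
# the roxygen examples above, and from_zelig_model() returns the underlying MCMCpack
# mcmc chain). The three reference-class methods defined above simply wrap the
# corresponding coda functions, so the same diagnostics can be reproduced by hand:
chain <- from_zelig_model(z.out)   # posterior draws as a coda::mcmc object
coda::geweke.diag(chain)           # equality-of-means test on early vs. late draws
coda::heidel.diag(chain)           # stationarity and half-width tests
coda::raftery.diag(chain)          # run-length control diagnostics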
diff --git a/R/model-gamma-gee.R b/R/model-gamma-gee.R
new file mode 100755
index 0000000..66311c0
--- /dev/null
+++ b/R/model-gamma-gee.R
@@ -0,0 +1,96 @@
+#' Generalized Estimating Equation for Gamma Regression
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'@param id a variable which identifies the clusters. The data should be sorted
+#'   by \code{id} and should be ordered within each cluster when appropriate.
+#'@param corstr a character string specifying the correlation structure: "independence",
+#'   "exchangeable", "ar1", "unstructured", or "userdefined". See \code{geeglm} in the
+#'   \code{geepack} package for other function arguments.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#'@examples
+#' library(Zelig)
+#' data(coalition)
+#' coalition$cluster <- c(rep(c(1:62), 5),rep(c(63), 4))
+#' sorted.coalition <- coalition[order(coalition$cluster),]
+#' z.out <- zelig(duration ~ fract + numst2, model = "gamma.gee", id = "cluster",
+#'                data = sorted.coalition, corstr = "exchangeable")
+#' summary(z.out)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_gammagee.html}
+#' @import methods
+#' @export Zelig-gamma-gee
+#' @exportClass Zelig-gamma-gee
+#'
+#' @include model-zelig.R
+#' @include model-gee.R
+#' @include model-gamma.R
+
+zgammagee <- setRefClass("Zelig-gamma-gee",
+                           contains = c("Zelig-gee", "Zelig-gamma"))
+
+zgammagee$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "gamma-gee"
+    .self$family <- "Gamma"
+    .self$link <- "inverse"
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    .self$year <- 2011
+    .self$category <- "continuous"
+    .self$authors <- "Patrick Lam"
+    .self$description = "General Estimating Equation for Gamma Regression"
+    .self$fn <- quote(geepack::geeglm)
+    # JSON from parent
+    .self$wrapper <- "gamma.gee"
+  }
+)
diff --git a/R/model-gamma-survey.R b/R/model-gamma-survey.R
new file mode 100755
index 0000000..79ef715
--- /dev/null
+++ b/R/model-gamma-survey.R
@@ -0,0 +1,109 @@
+#' Gamma Regression with Survey Weights
+#'
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'@examples
+#' library(Zelig)
+#' data(api, package="survey")
+#' z.out1 <- zelig(api00 ~ meals + yr.rnd, model = "gamma.survey",
+#' weights = ~pw, data = apistrat)
+#' summary(z.out1)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_gammasurvey.html}
+#' @import methods
+#' @export Zelig-gamma-survey
+#' @exportClass Zelig-gamma-survey
+#'
+#' @include model-zelig.R
+#' @include model-survey.R
+#' @include model-gamma.R
+
+zgammasurvey <- setRefClass("Zelig-gamma-survey",
+                           contains = c("Zelig-survey", "Zelig-gamma"))
+
+zgammasurvey$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "gamma-survey"
+    .self$family <- "Gamma"
+    .self$link <- "inverse"
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    .self$category <- "continuous"
+    .self$description = "Gamma Regression with Survey Weights"
+    # JSON from parent
+    .self$wrapper <- "gamma.survey"
+  }
+)
+
+zgammasurvey$methods(
+  param = function(z.out, method="mvn") {
+    shape <- MASS::gamma.shape(z.out)
+    if(identical(method,"mvn")){
+      simalpha <- rnorm(n = .self$num, mean = shape$alpha, sd = shape$SE)
+      simparam.local <- mvrnorm(n = .self$num, mu = coef(z.out), Sigma = vcov(z.out))
+      simparam.local <- list(simparam = simparam.local, simalpha = simalpha)
+      return(simparam.local)
+    } else if(identical(method,"point")){
+      return(list(simparam = t(as.matrix(coef(z.out))), simalpha = shape$alpha))
+    }
+  }
+)
+
+zgammasurvey$methods(
+  mcfun = function(x, b0=0, b1=1, alpha=1, sim=TRUE){
+    lambda <- 1/(b0 + b1 * x)
+    if(sim){
+        y <- rgamma(n=length(x), shape=alpha, scale = lambda)
+        return(y)
+    }else{
+        return(alpha * lambda)
+    }
+  }
+)
diff --git a/R/model-gamma.R b/R/model-gamma.R
new file mode 100755
index 0000000..4e2ebd7
--- /dev/null
+++ b/R/model-gamma.R
@@ -0,0 +1,119 @@
+#' Gamma Regression for Continuous, Positive Dependent Variables
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @examples
+#' library(Zelig)
+#' data(coalition)
+#' z.out <- zelig(duration ~ fract + numst2, model = "gamma", data = coalition)
+#' summary(z.out)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_gamma.html}
+#' @import methods
+#' @export Zelig-gamma
+#' @exportClass Zelig-gamma
+#'
+#' @include model-zelig.R
+#' @include model-glm.R
+zgamma <- setRefClass("Zelig-gamma",
+                      contains = "Zelig-glm")
+
+zgamma$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "gamma"
+    .self$family <- "Gamma"
+    .self$link <- "inverse"
+    .self$authors <- "Kosuke Imai, Gary King, Olivia Lau"
+    .self$year <- 2007
+    .self$category <- "bounded"
+    .self$description <- "Gamma Regression for Continuous, Positive Dependent Variables"
+    # JSON
+    .self$outcome <- "continous"
+    .self$wrapper <- "gamma"
+  }
+)
+
+zgamma$methods(
+  param = function(z.out, method="mvn") {
+    shape <- MASS::gamma.shape(z.out)
+    if(identical(method, "mvn")){
+      simalpha <- rnorm(n = .self$num, mean = shape$alpha, sd = shape$SE)
+      simparam.local <- mvrnorm(n = .self$num, mu = coef(z.out), Sigma = vcov(z.out))
+      simparam.local <- list(simparam = simparam.local, simalpha = simalpha)
+      return(simparam.local)
+    } else if(identical(method,"point")){
+      return(list(simparam = t(as.matrix(coef(z.out))), simalpha = shape$alpha ))
+    }
+  }
+)
+
+zgamma$methods(
+  qi = function(simparam, mm) {
+    coeff <- simparam$simparam
+    eta <- (coeff %*% t(mm) ) * simparam$simalpha  # JH need to better understand this parameterization.  Coefs appear parameterized so E(y_i) = 1/ (x_i\hat{\beta})
+    theta <- matrix(1 / eta, nrow = nrow(coeff), ncol=1)
+    ev <- theta * simparam$simalpha
+    pv<- matrix(rgamma(nrow(ev), shape = simparam$simalpha, scale = theta), nrow=nrow(ev), ncol=1)
+    return(list(ev = ev, pv = pv))
+  }
+)
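# Illustrative sketch of the parameterization noted in the qi() comment above (toy
# numbers, not package API): with the inverse link, coefficients satisfy
# E(y_i) = 1 / (x_i %*% beta), and qi() recovers this by scaling eta by alpha and
# then inverting.
beta  <- c(0.5, 0.2)               # hypothetical coefficients on the inverse scale
x_i   <- c(1, 3)                   # intercept plus one covariate value
alpha <- 2.5                       # gamma shape parameter
eta   <- sum(x_i * beta) * alpha   # as in qi(): (coeff %*% t(mm)) * simalpha
theta <- 1 / eta                   # scale parameter
ev    <- theta * alpha             # expected value
all.equal(ev, 1 / sum(x_i * beta)) # TRUE: E(y) = 1 / (x_i beta)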
+
+zgamma$methods(
+  mcfun = function(x, b0=0, b1=1, alpha=1, sim=TRUE){
+    lambda <- 1/(b0 + b1 * x)
+    if(sim){
+        y <- rgamma(n=length(x), shape=alpha, scale = lambda)
+        return(y)
+    }else{
+        return(alpha * lambda)
+    }
+  }
+)
diff --git a/R/model-gee.R b/R/model-gee.R
new file mode 100755
index 0000000..d1f2180
--- /dev/null
+++ b/R/model-gee.R
@@ -0,0 +1,76 @@
+#' Generalized Estimating Equations Model object for inheritance across models in Zelig
+#'
+#' @import methods
+#' @export Zelig-gee
+#' @exportClass Zelig-gee
+#'
+#' @include model-zelig.R
+
+zgee <- setRefClass("Zelig-gee",
+                    contains = "Zelig")
+
+zgee$methods(
+  initialize = function() {
+    callSuper()
+    .self$packageauthors <- "Soren Hojsgaard, Ulrich Halekoh, and Jun Yan"
+    .self$modelauthors <- "Patrick Lam"
+    .self$acceptweights <- TRUE
+  }
+)
+
+
+zgee$methods(
+  zelig = function(formula, id, ..., zcor = NULL, corstr = "independence", data, weights = NULL, by = NULL, bootstrap = FALSE) {
+
+    localData <- data # avoids CRAN warning about deep assignment from formula existing separately as argument and field
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- .self$zelig.call
+    if (corstr == "fixed" && is.null(zcor))
+      stop("R must be defined")
+    # if id is a valid column-name in data, then we just need to extract the
+    # column and re-order the data.frame and cluster information
+    if (is.character(id) && length(id) == 1 && id %in% colnames(localData)) {
+      id <- localData[, id]
+      localData <- localData[order(id), ]
+      id <- sort(id)
+    }
+    .self$model.call$family <- call(.self$family, .self$link)
+    .self$model.call$id <- id
+    .self$model.call$zcor <- zcor
+    .self$model.call$corstr <- corstr
+    callSuper(formula = formula, data = localData, ..., weights = weights, by = by, bootstrap = bootstrap)
+    # Prettify summary display without modifying .self$model.call
+    for (i in seq_along(.self$zelig.out$z.out)) {
+      .self$zelig.out$z.out[[i]]$call$id <- .self$zelig.call$id
+      .self$zelig.out$z.out[[i]]$call$zcor <- "zcor"
+    }
+  }
+)
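# Toy sketch of the cluster handling above (data and names are illustrative): when
# `id` is the name of a column in `data`, the method extracts that column, re-orders
# the data frame by it, and sorts the cluster vector to match, since geepack::geeglm()
# expects observations to be grouped by cluster.
dat <- data.frame(y = rnorm(6), x = rnorm(6), cluster = c(2, 1, 3, 1, 2, 3))
id  <- "cluster"
if (is.character(id) && length(id) == 1 && id %in% colnames(dat)) {
  id  <- dat[, id]
  dat <- dat[order(id), ]
  id  <- sort(id)
}
cbind(dat, id)  # rows now grouped by cluster, with a matching id vector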
+   
+zgee$methods(
+  param = function(z.out, method="mvn") {
+    so <- summary(z.out)
+    shape <- so$dispersion
+    if(identical(method,"point")){
+      return( list(simparam = t(as.matrix(coef(z.out))), simalpha = shape[1][1] ))
+    }else if(identical(method,"mvn")){
+      simalpha <- rnorm(n = .self$num,
+                      mean = shape[1][[1]],
+                      sd = shape[2][[1]])
+      simparam.local <- mvrnorm(n = .self$num,
+                        mu = coef(z.out),
+                        Sigma = so$cov.unscaled)
+      simparam.local <- list(simparam = simparam.local, simalpha = simalpha)
+      return(simparam.local)
+    }
+  }
+)
+
+# zgee$methods(
+#   show = function() {
+#     for (i in length(.self$zelig.out$z.out)) {
+#       .self$zelig.out$z.out[[i]]$call$id <- "id"
+#     }
+#     callSuper()
+#   }
+# )
diff --git a/R/model-glm.R b/R/model-glm.R
new file mode 100755
index 0000000..2e04f53
--- /dev/null
+++ b/R/model-glm.R
@@ -0,0 +1,33 @@
+#' Generalized Linear Model object for inheritance across models in Zelig
+#'
+#' @import methods
+#' @export Zelig-glm
+#' @exportClass Zelig-glm
+#'
+#' @include model-zelig.R
+
+zglm <- setRefClass("Zelig-glm",
+                    contains = "Zelig",
+                    fields = list(family = "character",
+                                  link = "character",
+                                  linkinv = "function"))
+
+zglm$methods(
+  initialize = function() {
+    callSuper()
+    .self$fn <- quote(stats::glm)
+    .self$packageauthors <- "R Core Team"
+    .self$acceptweights <- FALSE # "Why glm refers to the number of trials as weight is a trick question to the developers' conscience."
+  }
+)
+
+zglm$methods(
+  zelig = function(formula, data, ..., weights = NULL, by = NULL, bootstrap = FALSE) {
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- .self$zelig.call
+    .self$model.call$family <- call(.self$family, .self$link)
+    callSuper(formula = formula, data = data, ..., weights = weights, by = by, bootstrap = bootstrap)
+    rse <- lapply(.self$zelig.out$z.out, (function(x) vcovHC(x, type = "HC0")))
+    .self$test.statistics <- list(robust.se = rse)
+  }
+)
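# A minimal sketch of using the robust variance-covariance matrices computed above
# (assumes field access on the returned reference-class object works as written):
# every glm-based Zelig model stores HC0 sandwich vcov matrices in
# test.statistics$robust.se, one per fitted subset or imputation.
library(Zelig)
data(coalition)
z.out <- zelig(duration ~ fract + numst2, model = "gamma", data = coalition,
               cite = FALSE)
rob_vcov <- z.out$test.statistics$robust.se[[1]]  # HC0 vcov from sandwich::vcovHC
sqrt(diag(rob_vcov))                              # robust standard errors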
diff --git a/R/model-ivreg.R b/R/model-ivreg.R
new file mode 100644
index 0000000..1148c5d
--- /dev/null
+++ b/R/model-ivreg.R
@@ -0,0 +1,198 @@
+#' Instrumental-Variable Regression
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#'@details
+#' Additional parameters available to many models include:
+#' \itemize{
+#'   \item weights: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item bootstrap: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#'
+#'
+#' @examples
+#' library(Zelig)
+#' library(dplyr) # for the pipe operator %>%
+#' # load and transform data
+#' data("CigarettesSW")
+#' CigarettesSW$rprice <- with(CigarettesSW, price/cpi)
+#' CigarettesSW$rincome <- with(CigarettesSW, income/population/cpi)
+#' CigarettesSW$tdiff <- with(CigarettesSW, (taxs - tax)/cpi)
+#' # log second stage independent variables, as logging internally for ivreg is
+#' # not currently supported
+#' CigarettesSW$log_rprice <- log(CigarettesSW$rprice)
+#' CigarettesSW$log_rincome <- log(CigarettesSW$rincome)
+#' z.out1 <- zelig(log(packs) ~ log_rprice + log_rincome |
+#'                log_rincome + tdiff + I(tax/cpi), data = CigarettesSW,
+#'                subset = year == "1995", model = "ivreg")
+#' summary(z.out1)
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_ivreg.html}
+#' Fit instrumental-variable regression by two-stage least squares. This is
+#' equivalent to direct instrumental-variables estimation when the number of
+#' instruments is equal to the number of predictors.
+#'
+#' @param formula specification(s) of the regression relationship
+#' @param instruments the instruments. Either `instruments` is missing and
+#'   formula has three parts as in `y ~ x1 + x2 | z1 + z2 + z3` (recommended) or
+#'   formula is `y ~ x1 + x2` and instruments is a one-sided formula
+#' `~ z1 + z2 + z3`. Using `instruments` is not recommended with `zelig`.
+# @param an optional list. See the `contrasts.arg` of
+#   \code{\link{model.matrix.default}}.
+#' @param model,x,y logicals. If `TRUE` the corresponding components of the fit
+#' (the model frame, the model matrices, the response) are returned.
+#' @param ... further arguments passed to methods. See also \code{\link{zelig}}.
+#'
+#' @details Regressors and instruments for `ivreg` are most easily specified in
+#'   a formula with two parts on the right-hand side, e.g.,
+#'   `y ~ x1 + x2 | z1 + z2 + z3`, where `x1` and `x2` are the regressors and
+#'   `z1`, `z2`, and `z3` are the instruments. Note that exogenous regressors
+#'   have to be included as instruments for themselves. For example, if there is
+#'   one exogenous regressor `ex` and one endogenous regressor `en` with
+#'   instrument `in`, the appropriate formula would be `y ~ ex + en | ex + in`.
+#'   Equivalently, this can be specified as `y ~ ex + en | . - en + in`, i.e.,
+#'   by providing an update formula with a `.` in the second part of the
+#'   formula. The latter is typically more convenient, if there is a large
+#'   number of exogenous regressors.
+#'
+#' @examples
+#' library(Zelig)
+#' library(AER) # for sandwich vcov
+#' library(dplyr) # for the pipe operator %>%
+#'
+#' # load and transform data
+#' data("CigarettesSW")
+#' CigarettesSW$rprice <- with(CigarettesSW, price/cpi)
+#' CigarettesSW$rincome <- with(CigarettesSW, income/population/cpi)
+#' CigarettesSW$tdiff <- with(CigarettesSW, (taxs - tax)/cpi)
+#'
+#' # log second stage independent variables, as logging internally for ivreg is
+#' # not currently supported
+#' CigarettesSW$log_rprice <- log(CigarettesSW$rprice)
+#' CigarettesSW$log_rincome <- log(CigarettesSW$rincome)
+#'
+#' # estimate model
+#' z.out1 <- zelig(log(packs) ~ log_rprice + log_rincome |
+#'                     log_rincome + tdiff + I(tax/cpi),
+#'                     data = CigarettesSW,
+#'                     model = "ivreg")
+#' summary(z.out1)
+#'
+#' @source `ivreg` is from Christian Kleiber and Achim Zeileis (2008). Applied
+#' Econometrics with R. New York: Springer-Verlag. ISBN 978-0-387-77316-2. URL
+#' <https://CRAN.R-project.org/package=AER>
+#'
+#' @seealso \code{\link{zelig}},
+#' Greene, W. H. (1993) *Econometric Analysis*, 2nd ed., Macmillan.
+#'
+#' @md
+#' @import methods
+#' @export Zelig-ivreg
+#' @exportClass Zelig-ivreg
+#'
+#' @include model-zelig.R
+
+zivreg <- setRefClass("Zelig-ivreg", contains = "Zelig")
+
+zivreg$methods(
+    initialize = function() {
+        callSuper()
+        .self$name <- "ivreg"
+        .self$authors <- "Christopher Gandrud"
+        .self$packageauthors <- "Christian Kleiber and Achim Zeileis"
+        .self$year <- 2008
+        .self$description <- "Instrumental-Variable Regression"
+        .self$fn <- quote(AER::ivreg)
+        # JSON
+        .self$outcome <- "continous"
+        .self$wrapper <- "ivreg"
+        .self$acceptweights <- TRUE
+    }
+)
+
+zivreg$methods(
+    zelig = function(formula, data, ..., weights = NULL, by = NULL,
+                     bootstrap = FALSE) {
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- .self$zelig.call
+    callSuper(formula = formula, data = data, ...,
+              weights = weights, by = by, bootstrap = bootstrap)
+
+    # Automated Background Test Statistics and Criteria
+    rse <- lapply(.self$zelig.out$z.out, (function(x) vcovHC(x, type = "HC0")))
+    rse.se <- sqrt(diag(rse[[1]]))                 # Needs to work with "by" argument
+    est.se <- sqrt(diag(.self$get_vcov()[[1]]))
+  }
+)
+
+zivreg$methods(
+    param = function(z.out, method = "mvn") {
+        if(identical(method,"mvn")){
+            return(list(simparam = mvrnorm(.self$num, coef(z.out), vcov(z.out)),
+                   simalpha = rep(summary(z.out)$sigma, .self$num) )  )
+        } else if(identical(method, "point")){
+            return(list(simparam = t(as.matrix(coef(z.out))),
+                        simalpha = summary(z.out)$sigma))
+        } else {
+            stop("param called with method argument of undefined type.")
+        }
+    }
+)
+
+zivreg$methods(
+    qi = function(simparam, mm) {
+        ev <- simparam$simparam %*% t(mm)
+        pv <- as.matrix(rnorm(n = length(ev), mean = ev,
+                              sd = simparam$simalpha), nrow = length(ev),
+                              ncol = 1)
+        return(list(ev = ev, pv = pv))
+    }
+)
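# A hedged usage sketch of how param() and qi() above feed the usual Zelig workflow
# (assumes the generic setx()/sim() pipeline applies to "ivreg" and that `z.out1` and
# the transformed CigarettesSW data from the roxygen examples above already exist).
x.low  <- setx(z.out1, log_rincome = quantile(CigarettesSW$log_rincome, 0.25))
x.high <- setx(z.out1, log_rincome = quantile(CigarettesSW$log_rincome, 0.75))
s.out  <- sim(z.out1, x = x.low, x1 = x.high)  # draws via param(), summarised via qi()
summary(s.out)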
+
+#zivreg$methods(
+#    mcfun = function(z, h, b0 = 0, b1 = 1, alpha = 1, sim = TRUE){
+#        x <- b0 + 2*z + 3*h + sim * rnorm(n = length(z), sd = alpha + 1)
+#        y <- b0 + b1*x + sim * rnorm(n = length(z), sd = alpha)
+#        yx <- list(y, x)
+#        return(yx)
+#    }
+#)
diff --git a/R/model-logit-bayes.R b/R/model-logit-bayes.R
new file mode 100644
index 0000000..c9e93c3
--- /dev/null
+++ b/R/model-logit-bayes.R
@@ -0,0 +1,120 @@
+#' Bayesian Logit Regression
+#'
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @examples
+#' data(turnout)
+#' z.out <- zelig(vote ~ race + educate, model = "logit.bayes",data = turnout, verbose = FALSE)
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+#'   \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+#'   \item \code{thin}: thinning interval for the Markov chain. Only every thin-th draw from
+#'   the Markov chain is kept. The value of mcmc must be divisible by this value. The default
+#'   value is 1.
+#'   \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%)
+#'   is printed to the screen.
+#'   \item \code{seed}: seed for the random number generator. The default is \code{NA} which
+#'   corresponds to a random seed of 12345.
+#'   \item \code{beta.start}: starting values for the Markov chain, either a scalar or vector
+#'   with length equal to the number of estimated coefficients. The default is \code{NA}, such
+#'   that the maximum likelihood estimates are used as the starting values.
+#' }
+#' Use the following parameters to specify the model's priors:
+#' \itemize{
+#'     \item \code{b0}: prior mean for the coefficients, either a numeric vector or a
+#'     scalar. If a scalar value, that value will be the prior mean for all the
+#'     coefficients. The default is 0.
+#'     \item \code{B0}: prior precision parameter for the coefficients, either a
+#'     square matrix (with the dimensions equal to the number of the coefficients) or
+#'     a scalar. If a scalar value, that value times an identity matrix will be the
+#'     prior precision parameter. The default is 0, which leads to an improper prior.
+#' }
+#' Use the following arguments to specify optional output for the model:
+#' \itemize{
+#'     \item \code{bayes.resid}: defaults to FALSE. If TRUE, the latent
+#'     Bayesian residuals for all observations are returned. Alternatively,
+#'     users can specify a vector of observations for which the latent residuals should be returned.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_logitbayes.html}
+#' @import methods
+#' @export Zelig-logit-bayes
+#' @exportClass Zelig-logit-bayes
+#'
+#' @include model-zelig.R
+#' @include model-bayes.R
+#' @include model-logit.R
+
+zlogitbayes <- setRefClass("Zelig-logit-bayes",
+                             contains = c("Zelig-bayes",
+                                          "Zelig-logit"))
+
+zlogitbayes$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "logit-bayes"
+    .self$family <- "binomial"
+    .self$link <- "logit"
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    .self$year <- 2013
+    .self$category <- "dichotomous"
+    .self$authors <- "Ben Goodrich, Ying Lu"
+    .self$description = "Bayesian Logistic Regression for Dichotomous Dependent Variables"
+    .self$fn <- quote(MCMCpack::MCMClogit)
+    # JSON from parent
+    .self$wrapper <- "logit.bayes"
+  }
+)
+
+zlogitbayes$methods(
+  mcfun = function(x, b0 = 0, b1 = 1, ..., sim = TRUE){
+    mu <- 1/(1 + exp(-b0 - b1 * x))
+    if(sim) {
+        y <- rbinom(n = length(x), size = 1, prob = mu)
+        return(y)
+    } else {
+        return(mu)
+    }
+  }
+)
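# Toy sketch of the Monte Carlo data-generating process encoded by mcfun() above
# (plain R, not a package call): draw a binary outcome whose success probability is
# the inverse logit of b0 + b1 * x, then check that an ordinary logit roughly
# recovers the slope.
set.seed(1)
x  <- runif(1000, -2, 2)
mu <- 1 / (1 + exp(-(0 + 1 * x)))             # b0 = 0, b1 = 1
y  <- rbinom(length(x), size = 1, prob = mu)
coef(glm(y ~ x, family = binomial("logit")))  # slope estimate should be near 1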
diff --git a/R/model-logit-gee.R b/R/model-logit-gee.R
new file mode 100755
index 0000000..14e5de1
--- /dev/null
+++ b/R/model-logit-gee.R
@@ -0,0 +1,90 @@
+#' Generalized Estimating Equation for Logit Regression
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#' @param id a variable which identifies the clusters. The data should be sorted
+#'   by \code{id} and should be ordered within each cluster when appropriate.
+#' @param corstr a character string specifying the correlation structure:
+#'   "independence", "exchangeable", "ar1", "unstructured", or "userdefined". See
+#'   \code{geeglm} in the \code{geepack} package for other function arguments.
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#'@examples
+#'
+#' data(turnout)
+#' turnout$cluster <- rep(c(1:200), 10)
+#' sorted.turnout <- turnout[order(turnout$cluster),]
+#'
+#' z.out1 <- zelig(vote ~ race + educate, model = "logit.gee",
+#' id = "cluster", data = sorted.turnout)
+#'
+#' summary(z.out1)
+#' x.out1 <- setx(z.out1)
+#' s.out1 <- sim(z.out1, x = x.out1)
+#' summary(s.out1)
+#' plot(s.out1)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_logitgee.html}
+#' @import methods
+#' @export Zelig-logit-gee
+#' @exportClass Zelig-logit-gee
+#'
+#' @include model-zelig.R
+#' @include model-binchoice-gee.R
+
+zlogitgee <- setRefClass("Zelig-logit-gee",
+                           contains = c("Zelig-binchoice-gee"))
+
+zlogitgee$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "logit-gee"
+    .self$link <- "logit"
+    .self$description <- "General Estimating Equation for Logistic Regression"
+    .self$wrapper <- "logit.gee"
+  }
+)
diff --git a/R/model-logit-survey.R b/R/model-logit-survey.R
new file mode 100755
index 0000000..391e6e0
--- /dev/null
+++ b/R/model-logit-survey.R
@@ -0,0 +1,105 @@
+#' Logit Regression with Survey Weights
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item weights: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item bootstrap: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#'
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#'@param robust defaults to FALSE. If TRUE, zelig() computes robust standard errors based on sandwich estimators and the options selected in cluster.
+#'@param cluster if robust = TRUE, you may select a variable to define groups of correlated observations. Let x3 be a variable that consists of either discrete numeric values, character strings, or factors that define strata. Then
+#' z.out <- zelig(y ~ x1 + x2, robust = TRUE, cluster = "x3", model = "logit.survey", data = mydata)
+#' means that the observations can be correlated within the strata defined by the variable x3, and that robust standard errors should be calculated according to those clusters. If robust = TRUE but cluster is not specified, zelig() assumes that each observation falls into its own cluster.
+#'
+#'@examples
+#'
+#' data(api, package = "survey")
+#' apistrat$yr.rnd.numeric <- as.numeric(apistrat$yr.rnd == "Yes")
+#' z.out1 <- zelig(yr.rnd.numeric ~ meals + mobility, model = "logit.survey",
+#'                weights = apistrat$pw, data = apistrat)
+#'
+#' summary(z.out1)
+#' x.low <- setx(z.out1, meals= quantile(apistrat$meals, 0.2))
+#' x.high <- setx(z.out1, meals= quantile(apistrat$meals, 0.8))
+#' s.out1 <- sim(z.out1, x = x.low, x1 = x.high)
+#' summary(s.out1)
+#' plot(s.out1)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_logitsurvey.html}
+#' @import methods
+#' @export Zelig-logit-survey
+#' @exportClass Zelig-logit-survey
+#'
+#' @include model-zelig.R
+#' @include model-binchoice-survey.R
+
+zlogitsurvey <- setRefClass("Zelig-logit-survey",
+                           contains = c("Zelig-binchoice-survey"))
+
+zlogitsurvey$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "logit-survey"
+    .self$link <- "logit"
+    .self$description <- "Logistic Regression with Survey Weights"
+    .self$wrapper <- "logit.survey"
+  }
+)
+
+
+zlogitsurvey$methods(
+  mcfun = function(x, b0=0, b1=1, ..., sim=TRUE){
+    mu <- 1/(1 + exp(-b0 - b1 * x))
+    if(sim){
+        y <- rbinom(n=length(x), size=1, prob=mu)
+        return(y)
+    }else{
+        return(mu)
+    }
+  }
+)
diff --git a/R/model-logit.R b/R/model-logit.R
new file mode 100755
index 0000000..0525e5c
--- /dev/null
+++ b/R/model-logit.R
@@ -0,0 +1,122 @@
+#' Logistic Regression for Dichotomous Dependent Variables
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item weights: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item bootstrap: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#'
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#'@param robust defaults to FALSE. If TRUE, zelig() computes robust standard errors based on sandwich estimators and the options selected in cluster.
+#'@param cluster if robust = TRUE, you may select a variable to define groups of correlated observations. Let x3 be a variable that consists of either discrete numeric values, character strings, or factors that define strata. Then
+#'z.out <- zelig(y ~ x1 + x2, robust = TRUE, cluster = "x3", model = "logit", data = mydata)
+#'means that the observations can be correlated within the strata defined by the variable x3, and that robust standard errors should be calculated according to those clusters. If robust = TRUE but cluster is not specified, zelig() assumes that each observation falls into its own cluster.
+#'
+#'@examples
+#' library(Zelig)
+#' data(turnout)
+#' z.out1 <- zelig(vote ~ age + race, model = "logit", data = turnout,
+#'                 cite = FALSE)
+#' summary(z.out1)
+#' summary(z.out1, odds_ratios = TRUE)
+#' x.out1 <- setx(z.out1, age = 36, race = "white")
+#' s.out1 <- sim(z.out1, x = x.out1)
+#' summary(s.out1)
+#' plot(s.out1)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_logit.html}
+#' @import methods
+#' @export Zelig-logit
+#' @exportClass Zelig-logit
+#'
+#' @include model-zelig.R
+#' @include model-gee.R
+#' @include model-gamma.R
+#' @include model-glm.R
+#' @include model-binchoice.R
+
+zlogit <- setRefClass("Zelig-logit",
+                      contains = "Zelig-binchoice")
+
+zlogit$methods(initialize = function() {
+    callSuper()
+    .self$name <- "logit"
+    .self$link <- "logit"
+    .self$description = "Logistic Regression for Dichotomous Dependent Variables"
+    .self$packageauthors <- "R Core Team"
+    .self$wrapper <- "logit"
+})
+
+zlogit$methods(mcfun = function(x, b0 = 0, b1 = 1, ..., sim = TRUE) {
+    mu <- 1/(1 + exp(-b0 - b1 * x))
+    if (sim) {
+        y <- rbinom(n = length(x), size = 1, prob = mu)
+        return(y)
+    } else {
+        return(mu)
+    }
+  }
+)
+
+zlogit$methods(
+    show = function(odds_ratios = FALSE, ...) {
+    if (odds_ratios & !.self$mi & !.self$bootstrap) {
+        summ <- .self$zelig.out %>%
+            do(summ = {cat("Model: \n")
+                ## Replace coefficients with odds-ratios
+                .z.out.summary <- base::summary(.$z.out)
+                .z.out.summary <- or_summary(.z.out.summary)
+                print(.z.out.summary)
+            })
+    }
+    else {
+        callSuper(...)
+    }
+        #print(base::summary(.self$zelig.out))
+    }
+)
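# Sketch of the odds-ratio display above (assumption: the or_summary() helper, defined
# elsewhere in the package, exponentiates the logit coefficients). The same quantities
# can be computed directly from a fitted logit:
data(turnout, package = "Zelig")
fit <- glm(vote ~ age + race, family = binomial("logit"), data = turnout)
exp(coef(fit))             # odds ratios
exp(confint.default(fit))  # Wald confidence intervals on the odds-ratio scale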
diff --git a/R/model-lognorm.R b/R/model-lognorm.R
new file mode 100755
index 0000000..57eca18
--- /dev/null
+++ b/R/model-lognorm.R
@@ -0,0 +1,178 @@
+#' Log-Normal Regression for Duration Dependent Variables
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#' @param robust defaults to FALSE. If TRUE, zelig() computes robust standard errors based
+#' on sandwich estimators and the options selected in cluster.
+#' @param cluster if robust = TRUE, you may select a variable to define groups of correlated
+#' observations. Let x3 be a variable that consists of either discrete numeric values, character
+#' strings, or factors that define strata. Then
+#'  z.out <- zelig(y ~ x1 + x2, robust = TRUE, cluster = "x3", model = "lognorm", data = mydata)
+#'  means that the observations can be correlated within the strata defined by the variable x3,
+#'  and that robust standard errors should be calculated according to those clusters.
+#'  If robust = TRUE but cluster is not specified, zelig() assumes that each observation falls
+#'  into its own cluster.
+#'
+#'
+#' @details
+#' Additional parameters available to many models include:
+#' \itemize{
+#'   \item weights: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item bootstrap: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#'@examples
+#' library(Zelig)
+#' data(coalition)
+#' z.out <- zelig(Surv(duration, ciep12) ~ fract + numst2, model ="lognorm",  data = coalition)
+#' summary(z.out)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_lognorm.html}
+#' @import methods
+#' @export Zelig-lognorm
+#' @exportClass Zelig-lognorm
+#'
+#' @include model-zelig.R
+
+zlognorm <- setRefClass("Zelig-lognorm",
+                        contains ="Zelig",
+                        fields = list(linkinv = "function"))
+
+zlognorm$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "lognorm"
+    .self$authors <- "Matthew Owen, Olivia Lau, Kosuke Imai, Gary King"
+    .self$packageauthors <- "Terry M Therneau, and Thomas Lumley"
+    .self$year <- 2007
+    .self$description <- "Log-Normal Regression for Duration Dependent Variables"
+    .self$fn <- quote(survival::survreg)
+    .self$linkinv <- survreg.distributions[["lognormal"]]$itrans
+    # JSON
+    .self$outcome <- "discrete"
+    .self$wrapper <- "lognorm"
+    .self$acceptweights <- TRUE
+  }
+)
+
+zlognorm$methods(
+  zelig = function(formula, ..., robust = FALSE, cluster = NULL, data, weights = NULL, by = NULL, bootstrap = FALSE) {
+
+    localFormula <- formula # avoids CRAN warning about deep assignment from formula existing separately as argument and field
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- .self$zelig.call
+    if (!(is.null(cluster) || robust))
+      stop("If cluster is specified, then `robust` must be TRUE")
+    # Add cluster term
+    if (robust || !is.null(cluster))
+      localFormula <- cluster.formula(localFormula, cluster)
+    .self$model.call$dist <- "lognormal"
+    .self$model.call$model <- FALSE
+    callSuper(formula = localFormula, data = data, ..., robust = robust,
+              cluster = cluster, weights = weights, by = by, bootstrap = bootstrap)
+
+    if(!robust){
+      fn2 <- function(fc, data) {
+        fc$data <- data
+        return(fc)
+      }
+      robust.model.call <- .self$model.call
+      robust.model.call$robust <- TRUE
+
+      robust.zelig.out <- .self$data %>%
+      group_by_(.self$by) %>%
+      do(z.out = eval(fn2(robust.model.call, quote(as.data.frame(.))))$var )
+
+      .self$test.statistics<- list(robust.se = robust.zelig.out$z.out)
+    }
+  }
+)
+
+zlognorm$methods(
+  param = function(z.out, method="mvn") {
+    if(identical(method,"mvn")){
+      coeff <- coef(z.out)
+      mu <- c(coeff, log(z.out$scale))
+      cov <- vcov(z.out)
+      simulations <- mvrnorm(.self$num, mu = mu, Sigma = cov)
+      simparam.local <- as.matrix(simulations[, 1:length(coeff)])
+      simalpha <- as.matrix(simulations[, -(1:length(coeff))])
+      simparam.local <- list(simparam = simparam.local, simalpha = simalpha)
+      return(simparam.local)
+    } else if(identical(method,"point")){
+      return(list(simparam = t(as.matrix(coef(z.out))), simalpha = log(z.out$scale) ))
+    }
+  }
+)
+
+zlognorm$methods(
+  qi = function(simparam, mm) {
+    alpha <- simparam$simalpha
+    beta <- simparam$simparam
+    coeff <- simparam$simparam
+    eta <- coeff %*% t(mm)
+    theta <- as.matrix(apply(eta, 2, linkinv))
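+    # mean of a log-normal outcome: E[Y] = exp(mu + sigma^2 / 2), where mu = log(theta)
+    # and sigma = exp(alpha) (alpha was simulated on the log scale in param())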
+    ev <- exp(log(theta) + 0.5 * (exp(alpha))^2)
+    pv <- as.matrix(rlnorm(n=length(ev), meanlog=log(theta), sdlog=exp(alpha)), nrow=length(ev), ncol=1)
+    dimnames(ev) <- dimnames(theta)
+    return(list(ev = ev, pv = pv))
+  }
+)
+
+zlognorm$methods(
+  mcfun = function(x, b0=0, b1=1, alpha=1, sim=TRUE){
+    .self$mcformula <- as.Formula("Surv(y.sim, event) ~ x.sim")
+
+    mu <- b0 + b1 * x
+    event <- rep(1, length(x))
+    y.sim <- rlnorm(n=length(x), meanlog=mu, sdlog=alpha)
+    y.hat <- exp(mu + 0.5*alpha^2)
+
+    if(sim){
+        mydata <- data.frame(y.sim=y.sim, event=event, x.sim=x)
+        return(mydata)
+    }else{
+        mydata <- data.frame(y.hat=y.hat, event=event, x.seq=x)
+        return(mydata)
+    }
+  }
+)
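+
+# An end-to-end sketch for the lognorm model, mirroring the roxygen example above and
+# the standard Zelig workflow (setx() then sim()); the coalition data and variable names
+# come from that example, and the two covariate profiles are illustrative. Guarded by
+# `if (FALSE)` so nothing is evaluated when the package source is loaded.
+if (FALSE) {
+    library(Zelig)
+    data(coalition)
+    z.out <- zelig(Surv(duration, ciep12) ~ fract + numst2, model = "lognorm",
+                   data = coalition)
+    x.low <- setx(z.out, numst2 = 0)
+    x.high <- setx(z.out, numst2 = 1)
+    s.out <- sim(z.out, x = x.low, x1 = x.high)
+    summary(s.out)
+}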
diff --git a/R/model-ls.R b/R/model-ls.R
new file mode 100755
index 0000000..c0a5bfa
--- /dev/null
+++ b/R/model-ls.R
@@ -0,0 +1,229 @@
+#' Least Squares Regression for Continuous Dependent Variables
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#'@details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'@examples
+#' library(Zelig)
+#' data(macro)
+#' z.out1 <- zelig(unem ~ gdp + capmob + trade, model = "ls", data = macro,
+#' cite = FALSE)
+#' summary(z.out1)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_ls.html}
+#' @import methods
+#' @export Zelig-ls
+#' @exportClass Zelig-ls
+#'
+#' @include model-zelig.R
+
+zls <- setRefClass("Zelig-ls", contains = "Zelig")
+
+zls$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "ls"
+    .self$year <- 2007
+    .self$category <- "continuous"
+    .self$description <- "Least Squares Regression for Continuous Dependent Variables"
+    .self$packageauthors <- "R Core Team"
+    .self$fn <- quote(stats::lm)
+    # JSON
+    .self$outcome <- "continous"
+    .self$wrapper <- "ls"
+    .self$acceptweights <- TRUE
+  }
+)
+
+zls$methods(
+  zelig = function(formula, data, ..., weights = NULL, by = NULL,
+      bootstrap = FALSE) {
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- .self$zelig.call
+    callSuper(formula = formula, data = data, ...,
+              weights = weights, by = by, bootstrap = bootstrap)
+
+    # Automated Background Test Statistics and Criteria
+    rse <- lapply(.self$zelig.out$z.out, (function(x) vcovHC(x, type = "HC0")))
+    rse.se <- sqrt(diag(rse[[1]]))                 # Needs to work with "by" argument
+    est.se <- sqrt(diag(.self$get_vcov()[[1]]))
+    quickGim <- any( est.se > 1.5*rse.se | rse.se > 1.5*est.se )
+    .self$test.statistics<- list(robust.se = rse, gim.criteria = quickGim)
+  }
+)
+
+zls$methods(
+  param = function(z.out, method="mvn") {
+    if(identical(method,"mvn")){
+      return(list(simparam = mvrnorm(.self$num, coef(z.out), vcov(z.out)),
+                  simalpha = rep( summary(z.out)$sigma, .self$num) )  )
+    } else if(identical(method,"point")){
+      return(list(simparam = t(as.matrix(coef(z.out))),
+             simalpha=summary(z.out)$sigma))
+    } else {
+      stop("param called with method argument of undefined type.")
+    }
+  }
+)
+
+zls$methods(
+  qi = function(simparam, mm) {
+    ev <- simparam$simparam %*% t(mm)
+    pv <- as.matrix(rnorm(n=length(ev), mean=ev, sd=simparam$simalpha),
+                    nrow=length(ev), ncol=1)
+    return(list(ev = ev, pv = pv))
+  }
+)
+
+zls$methods(
+  gim = function(B=50, B2=50) {
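+    # Generalized information matrix (GIM) test of King and Roberts (2014), cited below:
+    # B outer bootstrap replicates build the reference distribution of the test statistic,
+    # while B2 inner replicates estimate the variance of the D statistic within each draw.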
+    ll.normal.bsIM <- function(par,y,X,sigma){
+        beta <- par[1:length(X)]
+        sigma2 <- sigma
+        -1/2 * (sum(log(sigma2) + (y -(X%*%beta))^2/sigma2))
+    }
+
+    getVb<-function(Dboot){
+      Dbar <- matrix(apply(Dboot,2,mean),nrow=B, ncol=length(Dhat), byrow=TRUE)
+      Diff <- Dboot - Dbar
+      Vb <- (t(Diff) %*% Diff) / (nrow(Dboot)-1)
+      return(Vb)
+    }
+
+    getSigma<-function(lm.obj){
+      return(sum(lm.obj$residuals^2)/(nrow(model.matrix(lm.obj))-ncol(model.matrix(lm.obj))))
+    }
+
+    D.est<-function(formula,data){
+      lm1 <- lm(formula,data, y=TRUE)
+      mm <- model.matrix(lm1)
+      y <- lm1$y
+      sigma <- getSigma(lm1)
+
+      grad <- apply(cbind(y,mm),1,function(x) numericGradient(ll.normal.bsIM, lm1$coefficients, y=x[1], X=x[2:length(x)], sigma=sigma))
+      meat <- grad%*%t(grad)
+      bread <- -solve(vcov(lm1))
+      Dhat <- nrow(mm)^(-1/2)* as.vector(diag(meat + bread))
+      return(Dhat)
+    }
+
+    D.est.vb<-function(formula,data){
+        lm1 <- lm(formula,data, y=TRUE)
+        mm <- model.matrix(lm1)
+        y <- lm1$y
+        sigma <- getSigma(lm1)
+
+        grad <- apply(cbind(y,mm),1,function(x) numericGradient(ll.normal.bsIM, lm1$coefficients, y=x[1], X=x[2:length(x)], sigma=sigma))
+        meat <- grad%*%t(grad)
+        bread <- -solve(vcov(lm1))
+        Dhat <- nrow(mm)^(-1/2)* as.vector(diag(meat + bread))
+
+        muB<-lm1$fitted.values
+        DB <- matrix(NA, nrow=B2, ncol=length(Dhat))
+
+        for(j in 1:B2){
+          yB2 <- rnorm(nrow(data), muB, sqrt(sigma))
+          lm1B2 <- lm(yB2 ~ mm-1)
+          sigmaB2 <- getSigma(lm1B2)
+
+          grad <- apply(cbind(yB2,model.matrix(lm1B2)),1,function(x) numericGradient(ll.normal.bsIM, lm1B2$coefficients, y=x[1], X=x[2:length(x)], sigma=sigmaB2))
+          meat <- grad%*%t(grad)
+          bread <- -solve(vcov(lm1B2))
+          DB[j,] <- nrow(mm)^(-1/2)*diag((meat + bread))
+        }
+        Vb <- getVb(DB)
+        T<- t(Dhat)%*%solve(Vb)%*%Dhat
+
+        return(list(Dhat=Dhat,T=T))
+    }
+
+    Dhat <- D.est(formula=.self$formula, data=.self$data)
+    lm1 <- lm(formula=.self$formula, data=.self$data)
+    mu <- lm1$fitted.values
+    sigma <- getSigma(lm1)
+    n <- length(mu)
+    yname <- all.vars(.self$formula[[2]])
+
+    Dboot <- matrix(NA, nrow=B, ncol=length(Dhat))
+    T <- rep(NA, B)
+    bootdata <- .self$data
+    for(i in 1:B){
+        yB <- rnorm(n, mu, sqrt(sigma))
+        bootdata[yname] <- yB
+        result <- D.est.vb(formula=.self$formula, data=bootdata)
+        Dboot[i,] <- result$Dhat
+        T[i] <- result$T
+    }
+
+    Vb <- getVb(Dboot)
+    omega <- t(Dhat) %*% solve(Vb) %*% Dhat
+    pb <- (B + 1 - sum(T < as.numeric(omega))) / (B + 1)
+
+    .self$test.statistics$gim <- list(stat=omega, pval=pb)
+
+    # When method used, add to references
+    gimreference <- bibentry(
+        bibtype="Article",
+        title = "How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It",
+        author = c(
+        person("Gary", "King"),
+        person("Margret E.", "Roberts")
+        ),
+        journal = "Political Analysis",
+        year = 2014,
+        pages = "1-21",
+        url =  "http://j.mp/InK5jU")
+    .self$refs <- c(.self$refs, gimreference)
+  }
+)
+
+zls$methods(
+  mcfun = function(x, b0=0, b1=1, alpha=1, sim=TRUE){
+    y <- b0 + b1*x + sim * rnorm(n=length(x), sd=alpha)
+    return(y)
+  }
+)
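+
+# A short sketch showing how the automated test statistics computed in zelig() above can
+# be inspected after fitting; the macro data and formula come from the roxygen example,
+# and the field names follow the assignments in the methods above. Guarded by `if (FALSE)`
+# so nothing is evaluated when the package source is loaded.
+if (FALSE) {
+    library(Zelig)
+    data(macro)
+    z.out <- zelig(unem ~ gdp + capmob + trade, model = "ls", data = macro, cite = FALSE)
+    z.out$test.statistics$robust.se     # HC0 sandwich variance-covariance matrix
+    z.out$test.statistics$gim.criteria  # TRUE if robust and classical SEs differ by > 50%
+}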
diff --git a/R/model-ma.R b/R/model-ma.R
new file mode 100755
index 0000000..7581a03
--- /dev/null
+++ b/R/model-ma.R
@@ -0,0 +1,88 @@
+#' Time-Series Model with Moving Average
+#'
+#' Warning: \code{summary} does not work with timeseries models after
+#' simulation.
+#'
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#' @param ts The name of the variable containing the time indicator. This should be passed in as
+#'     a string. If this variable is not provided, Zelig will assume that the data is already
+#'     ordered by time.
+#' @param cs Name of a variable that denotes the cross-sectional element of the data, for example,
+#'  country name in a dataset with time-series across different countries. As a variable name,
+#'  this should be in quotes. If this is not provided, Zelig will assume that all observations
+#'  come from the same unit over time, and should be pooled, but if provided, individual models will
+#'  be run in each cross-section.
+#'  If \code{cs} is given as an argument, \code{ts} must also be provided. Additionally, \code{by}
+#'  must be \code{NULL}.
+#' @param order A vector of length 3 passed in as \code{c(p,d,q)} where p represents the order of the
+#'     autoregressive model, d represents the number of differences taken in the model, and q represents
+#'     the order of the moving average model.
+#' @details
+#' Currently only the Reference class syntax is supported for time series. This model does
+#' not accept bootstraps or weights.
+#'
+#'
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#' @examples
+#' data(seatshare)
+#' subset <- seatshare[seatshare$country == "UNITED KINGDOM",]
+#' ts.out <- zelig(formula = unemp ~ leftseat, model = "ma", ts = "year", data = subset)
+#' summary(ts.out)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_ma.html}
+#' @import methods
+#' @export Zelig-ma
+#' @exportClass Zelig-ma
+#'
+#' @include model-zelig.R
+#' @include model-timeseries.R
+
+zma <- setRefClass("Zelig-ma",
+                       contains = "Zelig-timeseries")
+
+zma$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "ma"
+    .self$link <- "identity"
+    .self$fn <- quote(zeligArimaWrapper)
+    .self$description = "Time-Series Model with Moving Average"
+    .self$packageauthors <- "R Core Team"
+    .self$outcome <- "continuous"
+    .self$wrapper <- "timeseries"
+  }
+)
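+
+# A hedged sketch of the cross-sectional time-series interface described in the roxygen
+# block above: `ts` names the time variable and `cs` the cross-sectional unit, so separate
+# models are fit within each country. The seatshare data come from the example. Guarded by
+# `if (FALSE)` so nothing is evaluated when the package source is loaded.
+if (FALSE) {
+    library(Zelig)
+    data(seatshare)
+    ts.out <- zelig(unemp ~ leftseat, model = "ma", ts = "year", cs = "country",
+                    data = seatshare)
+    summary(ts.out)  # note: summary() is not supported after sim() for time-series models
+}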
diff --git a/R/model-mlogit-bayes.R b/R/model-mlogit-bayes.R
new file mode 100644
index 0000000..f5f2fb1
--- /dev/null
+++ b/R/model-mlogit-bayes.R
@@ -0,0 +1,143 @@
+#' Bayesian Multinomial Logistic Regression
+#'
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#' @examples
+#' data(mexico)
+#' z.out <- zelig(vote88 ~ pristr + othcok + othsocok, model = "mlogit.bayes",
+#'                data = mexico, verbose = FALSE)
+#'
+#' @details
+#' zelig() accepts the following arguments for mlogit.bayes:
+#' \itemize{
+#'     \item \code{baseline}: either a character string or numeric value (equal to
+#'     one of the observed values in the dependent variable) specifying a baseline category.
+#'     The default value is NA which sets the baseline to the first alphabetical or
+#'     numerical unique value of the dependent variable.
+#' }
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+#'   \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+#'   \item \code{mcmc.method}: either "MH" or "slice", specifying whether to use the Metropolis
+#'   algorithm or the slice sampler. The default value is "MH".
+#'   \item \code{thin}: thinning interval for the Markov chain. Only every thin-th draw from the Markov
+#'   chain is kept. The value of mcmc must be divisible by this value. The default value is 1.
+#'   \item \code{tune}: tuning parameter for the Metropolis-Hastings step, either a scalar or a numeric
+#'   vector (for k coefficients, enter a vector of length k). The tuning parameter should be set such
+#'   that the acceptance rate is satisfactory (between 0.2 and 0.5). The default value is 1.1.
+#'   \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%) is
+#'   printed to the screen.
+#'   \item \code{seed}: seed for the random number generator. The default is \code{NA} which corresponds
+#'   to a random seed of 12345.
+#'   \item \code{beta.start}: starting values for the Markov chain, either a scalar or vector with
+#'   length equal to the number of estimated coefficients. The default is \code{NA}, such
+#'   that the maximum likelihood estimates are used as the starting values.
+#' }
+#' Use the following parameters to specify the model's priors:
+#' \itemize{
+#'     \item \code{b0}: prior mean for the coefficients, either a numeric vector or a scalar.
+#'     If a scalar value, that value will be the prior mean for all the coefficients.
+#'     The default is 0.
+#'     \item \code{B0}: prior precision parameter for the coefficients, either a square
+#'     matrix (with the dimensions equal to the number of the coefficients) or a scalar.
+#'     If a scalar value, that value times an identity matrix will be the prior precision
+#'     parameter. The default is 0, which leads to an improper prior.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_mlogitbayes.html}
+#' @import methods
+#' @export Zelig-mlogit-bayes
+#' @exportClass Zelig-mlogit-bayes
+#'
+#' @include model-zelig.R
+#' @include model-bayes.R
+
+zmlogitbayes <- setRefClass("Zelig-mlogit-bayes",
+                             contains = c("Zelig-bayes"))
+
+zmlogitbayes$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "mlogit-bayes"
+    .self$year <- 2013
+    .self$category <- "discrete"
+    .self$authors <- "Ben Goodrich, Ying Lu"
+    .self$description = "Bayesian Multinomial Logistic Regression for Dependent Variables with Unordered Categorical Values"
+    .self$fn <- quote(MCMCpack::MCMCmnl)
+    # JSON from parent
+    .self$wrapper <- "mlogit.bayes"
+  }
+)
+
+zmlogitbayes$methods(
+  qi = function(simparam, mm) {
+    resp <- model.response(model.frame(.self$formula, data = .self$data))
+    level <- length(table(resp))
+    p <- dim(model.matrix(eval(.self$formula), data = .self$data))[2]
+    coef <- simparam
+    eta <- array(NA, c(nrow(coef), level, nrow(mm)))
+    eta[, 1, ] <- matrix(0, nrow(coef), nrow(mm))
+    for (j in 2:level) {
+      ind <- (1:p) * (level - 1) - (level - j)
+      eta[, j, ]<- coef[, ind] %*% t(mm)
+    }
+    eta <- exp(eta)
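+    # multinomial-logit (softmax) probabilities: each category's exp(linear predictor)
+    # divided by the sum over categories, with category 1 as the baseline (eta_1 fixed at 0)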
+    ev <- array(NA, c(nrow(coef), level, nrow(mm)))
+    pv <- matrix(NA, nrow(coef), nrow(mm))
+    colnames(ev) <- rep(NA, level)
+    for (k in 1:nrow(mm)) {
+      for (j in 1:level)
+        ev[, j, k] <- eta[, j, k] / rowSums(eta[, , k])
+    }
+    for (j in 1:level) {
+      colnames(ev)[j] <- paste("P(Y=", j, ")", sep="")
+    }
+    for (k in 1:nrow(mm)) {
+      probs <- as.matrix(ev[, , k])
+      temp <- apply(probs, 1, FUN = rmultinom, n = 1, size = 1)
+      temp <- as.matrix(t(temp) %*% (1:nrow(temp)))
+      pv <- apply(temp, 2, as.character)
+      pv <- as.factor(pv)
+    }
+    ev <- ev[, , 1]
+    return(list(ev = ev, pv = pv))
+  }
+)
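+
+# A sketch of the sampler and prior arguments documented above (burnin, mcmc, thin, b0, B0);
+# the values are purely illustrative, not recommendations. The mexico data come from the
+# roxygen example. Guarded by `if (FALSE)` so nothing is evaluated when the source is loaded.
+if (FALSE) {
+    library(Zelig)
+    data(mexico)
+    z.out <- zelig(vote88 ~ pristr + othcok + othsocok, model = "mlogit.bayes",
+                   data = mexico, burnin = 2000, mcmc = 20000, thin = 10,
+                   b0 = 0, B0 = 0.1, verbose = FALSE)
+    summary(z.out)
+}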
diff --git a/R/model-mlogit.R b/R/model-mlogit.R
deleted file mode 100644
index 298e672..0000000
--- a/R/model-mlogit.R
+++ /dev/null
@@ -1,183 +0,0 @@
-#' Multinomial Logistic Regression for Dependent Variables with Unordered Categorical Values
-#'
-#' Vignette: \url{http://docs.zeligproject.org/articles/zeligchoice_mlogit.html}
-#' @import methods
-#' @export Zelig-bprobit
-#' @exportClass Zelig-bprobit
-
-zmlogit <- setRefClass("Zelig-mlogit",
-                          contains = "Zelig",
-                          field = list(family = "ANY",
-                                       linkinv = "function"
-                          ))
-
-zmlogit$methods(
-  initialize = function() {
-    callSuper()
-    .self$name <- "mlogit"
-    .self$description <- "Multinomial Logistic Regression for Dependent Variables with Unordered Categorical Values"
-    .self$fn <- quote(VGAM::vglm)
-    .self$authors <- "Matthew Owen, Olivia Lau, Kosuke Imai, Gary King"
-    .self$packageauthors <- "Thomas W. Yee"
-    .self$year <- 2007
-    .self$category <- "multinomial"
-    .self$family <- "multinomial"
-    .self$wrapper <- "mlogit"
-    .self$vignette.url <- "http://docs.zeligproject.org/articles/zeligchoice_mlogit.html"
-  }
-)
-
-zmlogit$methods(
-  zelig = function(formula, data, ..., weights = NULL, by = NULL, bootstrap = FALSE) {
-    .self$zelig.call <- match.call(expand.dots = TRUE)
-    .self$model.call <- match.call(expand.dots = TRUE)
-    .self$model.call$family <- .self$family
-    callSuper(formula = formula, data = data, ..., weights = NULL, by = by, bootstrap = bootstrap)
-  }
-)
-
-zmlogit$methods(
-  param = function(z.out, method="mvn") {
-    if(identical(method,"mvn")){
-      return(mvrnorm(.self$num, coef(z.out), vcov(z.out))) 
-    } else if(identical(method,"point")){
-      return(t(as.matrix(coef(z.out))))
-    } else {
-      stop("param called with method argument of undefined type.")
-    }
-  }
-)
-
-zmlogit$methods(
-  # From ZeligChoice 4
-  qi = function(simparam, mm) {
-    fitted <- .self$zelig.out$z.out[[1]]
-    # get constraints from fitted model
-    constraints <- fitted@constraints
-    coef <- simparam
-    ndim <- ncol(fitted@y) - 1
-    all.coef <- NULL
-    v <- construct.v(constraints, ndim)
-    # put all indexed lists in the appropriate section
-    for (i in 1:ndim)
-      all.coef <- c(all.coef, list(coef[, v[[i]]]))
-#     cnames <- ynames <-  if (is.null(colnames(fitted@y))) {1:(ndim + 1)} else colnames(fitted@y)
-    if (is.null(colnames(fitted@y))) {
-      cnames <- 1:(ndim + 1)
-    } else
-        cnames <- colnames(fitted@y)
-    ynames <- cnames
-    cnames <- paste("Pr(Y=", cnames, ")", sep = "")
-    ev <- ev.mlogit(fitted, constraints, all.coef, mm, ndim, cnames)
-    pv <- pv.mlogit(fitted, ev) #, ynames)
-    return(list(ev = ev, pv = pv))
-  }
-)
-
-
-#' Split Names of Vectors into N-vectors
-#' This function is used to organize how variables are spread
-#' across the list of formulas
-#' @usage construct.v(constraints, ndim)
-#' @param constraints a constraints object
-#' @param ndim an integer specifying the number of dimensions
-#' @return a list of character-vectors
-construct.v <- function(constraints, ndim) {
-  v <- rep(list(NULL), ndim)
-  names <- names(constraints)
-  for (i in 1:length(constraints)) {
-    cm <- constraints[[i]]
-    for (j in 1:ndim) {
-      if (sum(cm[j, ]) == 1) {
-        v[[j]] <- if (ncol(cm) == 1)
-          c(v[[j]], names[i])
-        else
-          c(v[[j]], paste(names[i], ':', j, sep=""))
-      }
-    }
-  }
-  return(v)
-}
-
-
-#' Simulate Expected Value for Multinomial Logit
-#' @usage ev.mlogit(fitted, constraints, all.coef, x, ndim, cnames)
-#' @param fitted a fitted model object
-#' @param constraints a constraints object
-#' @param all.coef all the coeficients
-#' @param x a setx object
-#' @param ndim an integer specifying the number of dimensions
-#' @param cnames a character-vector specifying the names of the columns
-#' @return a matrix of simulated values
-ev.mlogit <- function (fitted, constraints, all.coef, x, ndim, cnames) {
-  if (is.null(x))
-    return(NA)
-  linkinv <- fitted@family@linkinv
-  xm <- rep(list(NULL), ndim)
-  sim.eta <- NULL
-  x <- as.matrix(x)
-  for (i in 1:length(constraints)) {
-    for (j in 1:ndim)
-      if (sum(constraints[[i]][j,] ) == 1)
-        xm[[j]] <- c(xm[[j]], x[, names(constraints)[i]])
-  }
-  for (i in 1:ndim)
-    sim.eta <- cbind(sim.eta, all.coef[[i]] %*% as.matrix(xm[[i]]))
-  ev <- linkinv(sim.eta, extra = fitted@extra)
-  colnames(ev) <- cnames
-  return(ev)
-}
-
-#' Simulate Predicted Values
-#' @usage pv.mlogit(fitted, ev)
-#' @param fitted a fitted model object
-#' @param ev the simulated expected values
-#' @return a vector of simulated values
-pv.mlogit <- function (fitted, ev){ #, ynames) {
-  if (all(is.na(ev)))
-    return(NA)
-  # initialize predicted values and a matrix
-  pv <- NULL
-  Ipv <- sim.cut <- matrix(NA, nrow = nrow(ev), ncol(ev))
-  k <- ncol(ev)
-  colnames(Ipv) <- colnames(sim.cut) <- colnames(ev)
-  sim.cut[, 1] <- ev[, 1]
-  for (j in 2:k)
-    sim.cut[, j] <- sim.cut[ , j - 1] + ev[, j]
-  tmp <- runif(nrow(ev), min = 0, max = 1)
-  for (j in 1:k)
-    Ipv[, j] <- tmp > sim.cut[, j]
-  for (j in 1:nrow(Ipv))
-    pv[j] <- 1 + sum(Ipv[j, ])
-  pv <- factor(pv, ordered = FALSE)
-  pv.matrix <- matrix(pv, nrow = dim(ev)[1])
-  levels(pv.matrix) <- levels(pv)
-  return(pv.matrix)
-}
-
-zmlogit$methods(
-  mcfun = function(x, b0=-0.5, b1=0.5, b2=-1, b3=1, ..., sim=TRUE){
-    mu1 <- b0 + b1 * x
-    mu2 <- b2 + b3 * x
-
-    if(sim){
-      n.sim = length(x)
-      y.star.1 <- exp( rlogis(n = n.sim, location = mu1, scale = 1) ) # latent continuous y
-      y.star.2 <- exp( rlogis(n = n.sim, location = mu2, scale = 1) ) # latent continuous y
-      pi1 <- y.star.1/(1 + y.star.1 + y.star.2)
-      pi2 <- y.star.2/(1 + y.star.1 + y.star.2)
-      pi3 <- 1 - pi1 - pi2
-
-      y.draw <- runif(n=n.sim)
-      y.obs <- 1 + as.numeric(y.draw>pi1) + as.numeric(y.draw>(pi1 + pi2))
-      return(as.factor(y.obs))
-    }else{
-      pi1.hat <- exp(mu1)/(1 + exp(mu1) + exp(mu2))
-      pi2.hat <- exp(mu2)/(1 + exp(mu1) + exp(mu2))
-      pi3.hat <- 1 - pi1.hat - pi2.hat
-      
-      y.obs.hat <- pi1.hat*1 + pi2.hat*2 + pi3.hat*3    # This is the expectation the MC test will check, although it is not substantively meaningful for factor dep. var.
-      return(y.obs.hat)
-    }
-  }
-)
diff --git a/R/model-negbinom.R b/R/model-negbinom.R
new file mode 100755
index 0000000..fb938a8
--- /dev/null
+++ b/R/model-negbinom.R
@@ -0,0 +1,137 @@
+#' Negative Binomial Regression for Event Count Dependent Variables
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#'@examples
+#' library(Zelig)
+#' data(sanction)
+#' z.out <- zelig(num ~ target + coop, model = "negbin", data = sanction)
+#' summary(z.out)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_negbin.html}
+#' @import methods
+#' @export Zelig-negbin
+#' @exportClass Zelig-negbin
+#'
+#' @include model-zelig.R
+
+znegbin <- setRefClass("Zelig-negbin",
+                         contains = "Zelig",
+                         field = list(simalpha = "list" # ancillary parameters
+                         ))
+
+znegbin$methods(
+  initialize = function() {
+    callSuper()
+    .self$fn <- quote(MASS::glm.nb)
+    .self$name <- "negbin"
+    .self$authors <- "Kosuke Imai, Gary King, Olivia Lau"
+    .self$packageauthors <- "William N. Venables, and Brian D. Ripley"
+    .self$year <- 2008
+    .self$category <- "count"
+    .self$description <- "Negative Binomial Regression for Event Count Dependent Variables"
+    # JSON
+    .self$outcome <- "discrete"
+    .self$wrapper <- "negbin"
+    .self$acceptweights <- TRUE
+  }
+)
+
+znegbin$methods(
+  zelig = function(formula, data, ..., weights=NULL, by = NULL, bootstrap = FALSE) {
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- .self$zelig.call
+    callSuper(formula=formula, data=data, ..., weights=weights, by = by, bootstrap = bootstrap)
+    rse <- lapply(.self$zelig.out$z.out, (function(x) vcovHC(x, type = "HC0")))
+    .self$test.statistics<- list(robust.se = rse)
+  }
+)
+
+znegbin$methods(
+  param = function(z.out, method="mvn") {
+    simalpha.local <- z.out$theta
+    if(identical(method,"mvn")){
+      simparam.local <- mvrnorm(n = .self$num, mu = coef(z.out),
+                        Sigma = vcov(z.out))
+      simparam.local <- list(simparam = simparam.local, simalpha = simalpha.local)
+      return(simparam.local)
+    } else if(identical(method,"point")){
+      return(list(simparam = t(as.matrix(coef(z.out))), simalpha = simalpha.local))
+    }
+  }
+)
+
+znegbin$methods(
+  qi = function(simparam, mm) {
+    coeff <- simparam$simparam
+    alpha <- simparam$simalpha
+    inverse <- family(.self$zelig.out$z.out[[1]])$linkinv
+    eta <- coeff %*% t(mm)
+    theta <- matrix(inverse(eta), nrow=nrow(coeff))
+    ev <- theta
+    pv <- matrix(NA, nrow=nrow(theta), ncol=ncol(theta))
+    #
+    for (i in 1:ncol(ev))
+      pv[, i] <- rnegbin(nrow(ev), mu = ev[i, ], theta = alpha[i])
+    return(list(ev  = ev, pv = pv))
+  }
+)
+
+znegbin$methods(
+  mcfun = function(x, b0=0, b1=1, ..., sim=TRUE){
+    mu <- exp(b0 + b1 * x)
+    if(sim){
+        y <- rnbinom(n=length(x), 1, mu=mu)
+        return(y)
+    }else{
+        return(mu)
+    }
+  }
+)
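+
+# A minimal first-differences sketch for the negbin model; the sanction data and formula
+# come from the roxygen example, the covariate settings are illustrative, and setx()/sim()
+# is the standard Zelig workflow. Guarded by `if (FALSE)` so nothing runs on load.
+if (FALSE) {
+    library(Zelig)
+    data(sanction)
+    z.out <- zelig(num ~ target + coop, model = "negbin", data = sanction, cite = FALSE)
+    x.low <- setx(z.out, coop = 1)
+    x.high <- setx(z.out, coop = 4)
+    s.out <- sim(z.out, x = x.low, x1 = x.high)
+    summary(s.out)
+}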
diff --git a/R/model-normal-bayes.R b/R/model-normal-bayes.R
new file mode 100644
index 0000000..d7d2c25
--- /dev/null
+++ b/R/model-normal-bayes.R
@@ -0,0 +1,135 @@
+#' Bayesian Normal Linear Regression
+#'
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @examples
+#' data(macro)
+#' z.out <- zelig(unem ~ gdp + capmob + trade, model = "normal.bayes", data = macro, verbose = FALSE)
+#'
+#' @details
+#' Additional parameters available to many models include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+#'   \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+#'   \item \code{thin}: thinning interval for the Markov chain. Only every thin-th draw from the Markov chain is kept. The value of mcmc must be divisible by this value. The default value is 1.
+#'   \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%) is printed to the screen.
+#'   \item \code{seed}: seed for the random number generator. The default is \code{NA} which corresponds to a random seed of 12345.
+#'   \item \code{beta.start}: starting values for the Markov chain, either a scalar or vector with length equal to the number of estimated coefficients. The default is \code{NA}, such that the maximum likelihood estimates are used as the starting values.
+#' }
+#' Use the following parameters to specify the model's priors:
+#' \itemize{
+#'     \item \code{b0}: prior mean for the coefficients, either a numeric vector or a scalar. If a scalar value, that value will be the prior mean for all the coefficients. The default is 0.
+#'     \item \code{B0}: prior precision parameter for the coefficients, either a square matrix (with the dimensions equal to the number of the coefficients) or a scalar. If a scalar value, that value times an identity matrix will be the prior precision parameter. The default is 0, which leads to an improper prior.
+#'     \item \code{c0}: c0/2 is the shape parameter for the Inverse Gamma prior on the variance of the disturbance terms.
+#'     \item \code{d0}: d0/2 is the scale parameter for the Inverse Gamma prior on the variance of the disturbance terms.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#' @examples
+#'
+#' data(macro)
+#' z.out <- zelig(unem ~ gdp + capmob + trade, model = "normal.bayes",
+#' data = macro, verbose = FALSE)
+#'
+#' z.out$geweke.diag()
+#' z.out$heidel.diag()
+#' z.out$raftery.diag()
+#' summary(z.out)
+#'
+#' x.out <- setx(z.out)
+#' s.out1 <- sim(z.out, x = x.out)
+#' summary(s.out1)
+#'
+#' x.high <- setx(z.out, trade = quantile(macro$trade, prob = 0.8))
+#' x.low <- setx(z.out, trade = quantile(macro$trade, prob = 0.2))
+#'
+#' s.out2 <- sim(z.out, x = x.high, x1 = x.low)
+#' summary(s.out2)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_normalbayes.html}
+#' @import methods
+#' @export Zelig-normal-bayes
+#' @exportClass Zelig-normal-bayes
+#'
+#' @include model-zelig.R
+#' @include model-bayes.R
+#' @include model-normal.R
+
+znormalbayes <- setRefClass("Zelig-normal-bayes",
+                             contains = c("Zelig-bayes",
+                                          "Zelig-normal"))
+
+znormalbayes$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "normal-bayes" # CC: should't it be lsbayes?
+    .self$year <- 2013
+    .self$category <- "continuous"
+    .self$authors <- "Ben Goodrich, Ying Lu"
+    .self$description = "Bayesian Normal Linear Regression"
+    .self$fn <- quote(MCMCpack::MCMCregress)
+    # JSON from parent
+    .self$wrapper <- "normal.bayes"
+  }
+)
+
+znormalbayes$methods(
+  qi = function(simparam, mm) {
+    # Extract simulated parameters and get column names
+    coef <- simparam
+    cols <- colnames(coef)
+    # Place the simulated variances in their own vector
+    sigma2 <- coef[, ncol(coef)]
+    # Remove the "sigma2" (variance) parameter
+    # which should already be placed
+    # in the simulated parameters
+    cols <- cols[ ! "sigma2" == cols ]
+    coef <- coef[, cols]
+    ev <- coef %*% t(mm)
+    pv <- matrix(rnorm(nrow(ev), ev, sqrt(sigma2)))
+    return(list(ev = ev, pv = pv))
+  }
+)
+
+znormalbayes$methods(
+  mcfun = function(x, b0=0, b1=1, alpha=1, sim=TRUE){
+    y <- b0 + b1*x + sim * rnorm(n=length(x), sd=alpha)
+    return(y)
+  }
+)
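+
+# A sketch of the prior arguments documented above (b0, B0, c0, d0); the values are
+# illustrative only and the macro data come from the roxygen example. Guarded by
+# `if (FALSE)` so nothing is evaluated when the package source is loaded.
+if (FALSE) {
+    library(Zelig)
+    data(macro)
+    z.out <- zelig(unem ~ gdp + capmob + trade, model = "normal.bayes", data = macro,
+                   b0 = 0, B0 = 0.01, c0 = 2, d0 = 2, verbose = FALSE)
+    summary(z.out)
+}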
diff --git a/R/model-normal-gee.R b/R/model-normal-gee.R
new file mode 100755
index 0000000..e7a85da
--- /dev/null
+++ b/R/model-normal-gee.R
@@ -0,0 +1,99 @@
+#' Generalized Estimating Equation for Normal Regression
+#'
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'  to the console.
+#' @param robust defaults to TRUE. If TRUE, consistent standard errors are estimated using a "sandwich"
+#' estimator.
+#'@param corstr defaults to "independence". It can take on the following arguments:
+#'@param Independence (corstr = independence): cor(y[it], y[it']) = 0 for all t, t' with t not equal to t'.
+#' It assumes that there is no correlation within the clusters and the model becomes equivalent
+#'  to standard normal regression. The "working" correlation matrix is the identity matrix.
+#'@param Fixed (corstr = fixed): If selected, the user must define the "working" correlation
+#'matrix with the R argument rather than estimating it from the model.
+#'@param id: the name of a variable which identifies the clusters. The data should be sorted by
+#'id and should be ordered within each cluster where appropriate
+#'@param corstr: character string specifying the correlation structure: "independence",
+#'"exchangeable", "ar1", "unstructured" and "userdefined"
+#'@param geeglm: See geeglm in package geepack for other function arguments
+#'@param Mv: defaults to 1. It specifies the number of periods of correlation and
+#' only needs to be specified when \code{corstr} is stat_M_dep, non_stat_M_dep, or AR-M.
+#'@param R: defaults to NULL. It specifies a user-defined correlation matrix rather than
+#' estimating it from the data. The argument is used only when corstr is "fixed". The input is a TxT
+#' matrix of correlations, where T is the size of the largest cluster.
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @examples
+#' library(Zelig)
+#' data(macro)
+#' z.out <- zelig(unem ~ gdp + capmob + trade, model ="normal.gee", id = "country",
+#'         data = macro, corstr = "AR-M")
+#' summary(z.out)
+
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_normalgee.html}
+#' @import methods
+#' @export Zelig-normal-gee
+#' @exportClass Zelig-normal-gee
+#'
+#' @include model-zelig.R
+#' @include model-gee.R
+#' @include model-normal.R
+
+znormalgee <- setRefClass("Zelig-normal-gee",
+                           contains = c("Zelig-gee", "Zelig-normal"))
+
+znormalgee$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "normal-gee"
+    .self$family <- "gaussian"
+    .self$link <- "identity"
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    .self$year <- 2011
+    .self$category <- "continuous"
+    .self$authors <- "Patrick Lam"
+    .self$description = "General Estimating Equation for Normal Regression"
+    .self$fn <- quote(geepack::geeglm)
+    # JSON from parent
+    .self$wrapper <- "normal.gee"
+  }
+)
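+
+# A sketch of the clustered-data interface documented above: `id` names the cluster
+# variable and `corstr` the working correlation structure ("exchangeable" is one of the
+# geepack values listed). The macro data and formula follow the roxygen example. Guarded
+# by `if (FALSE)` so nothing is evaluated when the package source is loaded.
+if (FALSE) {
+    library(Zelig)
+    data(macro)
+    z.out <- zelig(unem ~ gdp + capmob + trade, model = "normal.gee", id = "country",
+                   corstr = "exchangeable", data = macro)
+    summary(z.out)
+}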
diff --git a/R/model-normal-survey.R b/R/model-normal-survey.R
new file mode 100755
index 0000000..a2de438
--- /dev/null
+++ b/R/model-normal-survey.R
@@ -0,0 +1,129 @@
+#' Normal Regression for Continuous Dependent Variables with Survey Weights
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @examples
+#' library(Zelig)
+#' data(api, package = "survey")
+#' z.out1 <- zelig(api00 ~ meals + yr.rnd, model = "normal.survey", weights = ~pw, data = apistrat)
+#' summary(z.out1)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_normalsurvey.html}
+#' @import methods
+#' @export Zelig-normal-survey
+#' @exportClass Zelig-normal-survey
+#'
+#' @include model-zelig.R
+#' @include model-survey.R
+#' @include model-normal.R
+
+
+znormalsurvey <- setRefClass("Zelig-normal-survey",
+                       contains = c("Zelig-survey"),
+                       fields = list(family = "character",
+                                  link = "character",
+                                  linkinv = "function"))
+                                  #, "Zelig-normal"))
+
+znormalsurvey$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "normal-survey"
+    .self$family <- "gaussian"
+    .self$link <- "identity"
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    .self$category <- "continuous"
+    .self$description <- "Normal Regression for Continuous Dependent Variables with Survey Weights"
+    .self$outcome <- "continuous"
+    # JSON
+    .self$wrapper <- "normal.survey"
+  }
+)
+
+znormalsurvey$methods(
+  param = function(z.out, method="mvn") {
+    degrees.freedom <- z.out$df.residual
+    sig2 <- base::summary(z.out)$dispersion # not to call class summary method
+    simalpha <- sqrt(degrees.freedom * sig2
+                     / rchisq(.self$num, degrees.freedom))
+
+    if(identical(method,"mvn")){
+      simparam.local <- mvrnorm(n = .self$num,
+                              mu = coef(z.out),
+                              Sigma = vcov(z.out))
+      simparam.local <- list(simparam = simparam.local, simalpha = simalpha)
+      return(simparam.local)
+    } else if(identical(method,"point")){
+      return(list(simparam = t(as.matrix(coef(z.out))), simalpha = simalpha))
+    }
+
+  }
+)
+
+znormalsurvey$methods(
+  qi = function(simparam, mm) {
+    theta <- matrix(simparam$simparam %*% t(mm),
+                    nrow = nrow(simparam$simparam))
+    ev <- theta
+    pv <- matrix(NA, nrow = nrow(theta), ncol = ncol(theta))
+    for (j in 1:nrow(ev))
+      pv[j, ] <- rnorm(ncol(ev),
+                       mean = ev[j, ],
+                       sd = simparam$simalpha[j])
+    return(list(ev = ev, pv = pv))
+  }
+)
+
+znormalsurvey$methods(
+  mcfun = function(x, b0=0, b1=1, alpha=1, sim=TRUE){
+    y <- b0 + b1*x + sim * rnorm(n=length(x), sd=alpha)
+    return(y)
+  }
+)
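+
+# A sketch of the survey-weighted workflow: probability weights are passed through
+# `weights` as in the roxygen example (apistrat and pw come from the survey package),
+# followed by the usual setx()/sim() step. Guarded by `if (FALSE)` so nothing is
+# evaluated when the package source is loaded.
+if (FALSE) {
+    library(Zelig)
+    data(api, package = "survey")
+    z.out <- zelig(api00 ~ meals + yr.rnd, model = "normal.survey",
+                   weights = ~pw, data = apistrat)
+    x.out <- setx(z.out)
+    s.out <- sim(z.out, x = x.out)
+    summary(s.out)
+}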
diff --git a/R/model-normal.R b/R/model-normal.R
new file mode 100755
index 0000000..09ee98c
--- /dev/null
+++ b/R/model-normal.R
@@ -0,0 +1,138 @@
+#' Normal Regression for Continuous Dependent Variables
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE}, don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#'
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'@param below (defaults to 0) the point at which the dependent variable is censored from below. If any values in the dependent variable are observed to be less than the censoring point, that observation is assumed to be censored from below at the observed value. (A Bayesian implementation supporting both left and right censoring is also available.)
+#'@param robust defaults to FALSE. If TRUE, zelig() computes robust standard errors based on sandwich estimators and the options selected in cluster.
+#'@param cluster if robust = TRUE, you may select a variable to define groups of correlated observations. Let x3 be a variable that consists of either discrete numeric values, character strings, or factors that define strata. Then
+#'z.out <- zelig(y ~ x1 + x2, robust = TRUE, cluster = "x3", model = "tobit", data = mydata)
+#'means that the observations can be correlated within the strata defined by the variable x3, and that robust standard errors should be calculated according to those clusters. If robust = TRUE but cluster is not specified, zelig() assumes that each observation falls into its own cluster.
+#'
+#'@examples
+#' data(macro)
+#' z.out1 <- zelig(unem ~ gdp + capmob + trade, model = "normal",
+#' data = macro)
+#' summary(z.out1)
+#' x.high <- setx(z.out1, trade = quantile(macro$trade, 0.8))
+#' x.low <- setx(z.out1, trade = quantile(macro$trade, 0.2))
+#' s.out1 <- sim(z.out1, x = x.high, x1 = x.low)
+#' summary(s.out1)
+#' plot(s.out1)
+#'
+#'
+#'@seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_normal.html}
+#' @import methods
+#' @export Zelig-normal
+#' @exportClass Zelig-normal
+#'
+#' @include model-zelig.R
+#' @include model-glm.R
+
+znormal <- setRefClass("Zelig-normal",
+                       contains = "Zelig-glm")
+
+znormal$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "normal"
+    .self$family <- "gaussian"
+    .self$link <- "identity"
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    .self$authors <- "Kosuke Imai, Gary King, Olivia Lau"
+    .self$year <- 2008
+    .self$category <- "continuous"
+    .self$description <- "Normal Regression for Continuous Dependent Variables"
+    # JSON
+    .self$outcome <- "continuous"
+    .self$wrapper <- "normal"
+  }
+)
+
+znormal$methods(
+  param = function(z.out, method="mvn") {
+    degrees.freedom <- z.out$df.residual
+    sig2 <- base::summary(z.out)$dispersion # base summary(), not the reference class summary method
+    simalpha <- sqrt(degrees.freedom * sig2
+                     / rchisq(.self$num, degrees.freedom))
+
+    if(identical(method,"mvn")){
+      simparam.local <- mvrnorm(n = .self$num,
+                              mu = coef(z.out),
+                              Sigma = vcov(z.out))
+      simparam.local <- list(simparam = simparam.local, simalpha = simalpha)
+      return(simparam.local)
+    } else if(identical(method,"point")){
+      return(list(simparam = t(as.matrix(coef(z.out))), simalpha = simalpha))
+    }
+
+  }
+)
+
+znormal$methods(
+  qi = function(simparam, mm) {
+    theta <- matrix(simparam$simparam %*% t(mm),
+                    nrow = nrow(simparam$simparam))
+    ev <- theta
+    pv <- matrix(NA, nrow = nrow(theta), ncol = ncol(theta))
+    for (j in 1:nrow(ev))
+      pv[j, ] <- rnorm(ncol(ev),
+                       mean = ev[j, ],
+                       sd = simparam$simalpha[j])
+    return(list(ev = ev, pv = pv))
+  }
+)
+
+znormal$methods(
+  mcfun = function(x, b0=0, b1=1, alpha=1, sim=TRUE){
+    y <- b0 + b1*x + sim * rnorm(n=length(x), sd=alpha)
+    return(y)
+  }
+)
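+
+# What mcfun() encodes, spelled out as a standalone Monte Carlo check
+# (a sketch only; the data and coefficient values are arbitrary):
+# x <- runif(500)
+# y <- 0 + 1 * x + rnorm(length(x), sd = 1)   # the sim = TRUE branch
+# coef(lm(y ~ x))                             # should be close to c(0, 1)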
diff --git a/R/model-obinchoice.R b/R/model-obinchoice.R
deleted file mode 100644
index 5a904e8..0000000
--- a/R/model-obinchoice.R
+++ /dev/null
@@ -1,117 +0,0 @@
-#' Ordered Choice object for inheritance across models in ZeligChoice
-#'
-#' @import methods
-#' @export Zelig-obinchoice
-#' @exportClass Zelig-obinchoice
-
-
-
-
-zobinchoice <- setRefClass("Zelig-obinchoice",
-                           contains = "Zelig",
-                           field = list(method = "character",
-                                        linkinv = "function"
-                           ))
-
-zobinchoice$methods(
-  initialize = function() {
-    callSuper()
-    .self$fn <- quote(MASS::polr)
-    .self$authors <- "Matthew Owen, Olivia Lau, Kosuke Imai, Gary King"
-
-    .self$year <- 2011
-    .self$category <- "multinomial"
-  }
-)
-
-zobinchoice$methods(
-  zelig = function(formula, data, ..., weights = NULL, by = NULL,
-                   bootstrap = FALSE) {
-    .self$zelig.call <- match.call(expand.dots = TRUE)
-    .self$model.call <- match.call(expand.dots = TRUE)
-    .self$model.call$method <- .self$method
-    .self$model.call$Hess <- TRUE
-    localformula <- update(formula, as.factor(.) ~ .)
-    if (!is.null(weights)) 
-        message('Note: Zelig weight results may differ from those in MASS::polr.')
-    callSuper(formula = localformula, data = data, ..., weights = weights, 
-              by = by, bootstrap = bootstrap)
-
-    #rse<-plyr::llply(.self$zelig.out$z.out, (function(x) vcovHC(x,type="HC0")))
-    #.self$test.statistics<- list(robust.se = rse)
-  }
-)
-
-zobinchoice$methods(
-  param = function(z.out, method="mvn") {
-    coef <- coef(z.out)
-    zeta <- z.out$zeta
-    theta <- zeta[1]
-    for (k in 2:length(zeta))
-      theta[k] <- log(zeta[k] - zeta[k - 1])
-    simalpha <- list(coef = coef, zeta = zeta, lev = z.out$lev)
-
-    if(identical(method, "mvn")){
-      localsimparam <- mvrnorm(.self$num, c(coef, theta), vcov(z.out))
-      return(list(simparam = localsimparam, simalpha = simalpha))
-    }else if(identical(method, "point")){
-      return(list(simparam =t(as.matrix(c(coef, theta))), simalpha = simalpha))
-    }
-  }
-)
-
-zobinchoice$methods(
-  # From ZeligChoice 4
-  qi = function(simparam, mm) {
-    # startup work
-    simulations <- simparam$simparam
-    coef <- simparam$simalpha$coef
-    zeta <- simparam$simalpha$zeta
-    lev <- simparam$simalpha$lev
-    # simulations on coefficients
-    sim.coef <- simulations[, 1:length(coef), drop = FALSE]
-    # remove (Intercept), make sure matrix is numeric
-    mat <- as.numeric(as.matrix(mm)[, -1])
-    # compute eta
-    eta <- t(mat %*% t(sim.coef))
-    # simulations on zeta, and define theta
-    sim.zeta <- sim.theta <- simulations[, (length(coef) + 1):ncol(simulations),
-                                         drop = FALSE]
-    sim.zeta[, -1] <- exp(sim.theta[, -1])
-    sim.zeta <- t(apply(sim.zeta, 1, cumsum))
-
-    ##----- Expected value
-
-    k <- length(zeta) + 1
-    # remove (Intercept), make sure matrix is numeric
-    mat <- as.numeric(as.matrix(mm)[, -1])
-    eta <- t(mat %*% t(sim.coef))
-    rows <- as.matrix(mm)
-    Ipv <- cuts <- tmp0 <- array(0, dim = c(.self$num, k, nrow(rows)),
-                          dimnames = list(1:.self$num, lev, rownames(rows)))
-    for (i in 1:.self$num) {
-      cuts[i, , ] <- t(.self$linkinv(eta[i, ], sim.zeta[i, ]))
-    }
-    tmp0[, 2:k, ] <- cuts[, 2:k - 1, ] # 2:k-1 => 1, 2, 3, 4, ..., k-1
-    ev <- cuts - tmp0
-    dimnames(ev) <- list(1:.self$num, lev, rownames(mm))
-    # remove unnecessary dimensions
-    ev <- ev[, , 1]
-    colnames(ev) <- lev
-
-    ##----- Predicted value
-    pv <- matrix(NA, nrow = .self$num, ncol = nrow(as.matrix(mm)))
-    tmp <- matrix(runif(length(cuts[, 1, ]), 0, 1),
-                  nrow = .self$num,
-                  ncol = nrow(mm))
-    for (j in 1:k)
-      Ipv[, j, ] <- as.integer(tmp > cuts[, j, ])
-    for (j in 1:nrow(mm))
-      pv[, j] <- 1 + rowSums(Ipv[, , j, drop = FALSE])
-    factors <- factor(pv,
-                      labels = lev[1:length(lev) %in% sort(unique(pv))],
-                      ordered = TRUE)
-
-    return(list(ev = ev, pv = pv))
-  }
-)
diff --git a/R/model-ologit.R b/R/model-ologit.R
deleted file mode 100644
index 5f52443..0000000
--- a/R/model-ologit.R
+++ /dev/null
@@ -1,52 +0,0 @@
-#' Ordinal Logistic Regression for Ordered Categorical Dependent Variables
-#'
-#' Vignette: \url{http://docs.zeligproject.org/articles/zeligchoice_ologit.html}
-#' @import methods
-#' @export Zelig-ologit
-#' @exportClass Zelig-ologit
-#'
-#' @include model-obinchoice.R
-
-zologit <- setRefClass("Zelig-ologit",
-                       contains = "Zelig-obinchoice")
-
-zologit$methods(
-  initialize = function() {
-    callSuper()
-    .self$name <- "ologit"
-    .self$packageauthors <- "William N. Venables, and Brian D. Ripley"
-    .self$description <- "Ordinal Logit Regression for Ordered Categorical Dependent Variables"
-    .self$method <- "logistic"
-    .self$linkinv <- function(eta, zeta) {
-      tmp1 <- matrix(1, nrow = length(eta), ncol = length(zeta) + 1)
-      tmp1[, 1:length(zeta)] <- exp(zeta - eta) / (1 + exp(zeta - eta))
-      return(tmp1)
-    }
-    .self$wrapper <- "ologit"
-    .self$vignette.url <- "http://docs.zeligproject.org/articles/zeligchoice_ologit.html"
-  }
-)
-
-
-zologit$methods(
-  mcfun = function(x, b0 = 0, b1 = 1, ..., sim = TRUE){
-    mu <- b0 + b1 * x
-    n.sim = length(x)
-    y.star <- rlogis(n = n.sim, location = mu, scale = 1)  # latent continuous y
-    t <- c(0,1,2)  # vector of cutpoints dividing latent space into ordered outcomes
-
-    if(sim){
-      y.obs <- rep(1, n.sim)
-      for(i in 1:length(t)){
-        y.obs <- y.obs + as.numeric(y.star > t[i]) # observed ordered outcome
-      }
-      return(as.factor(y.obs))
-    }else{
-      y.obs.hat <- rep(1, n.sim)
-      for(i in 1:length(t)){
-        y.obs.hat <- y.obs.hat + plogis(q = t[i], location = mu , scale = 1, lower.tail = FALSE) # expectation of observed ordered outcome
-      }
-      return(y.obs.hat)
-    }
-  }
-)
diff --git a/R/model-oprobit-bayes.R b/R/model-oprobit-bayes.R
new file mode 100644
index 0000000..0326ba6
--- /dev/null
+++ b/R/model-oprobit-bayes.R
@@ -0,0 +1,154 @@
+#' Bayesian Ordered Probit Regression
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#'@details
+#' Additional parameters available to many models include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+#'   \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+#'   \item \code{thin}: thinning interval for the Markov chain. Only every thin-th draw from
+#'   the Markov chain is kept. The value of mcmc must be divisible by this value. The default
+#'   value is 1.
+#'   \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%)
+#'   is printed to the screen.
+#'   \item \code{seed}: seed for the random number generator. The default is \code{NA} which
+#'   corresponds to a random seed of 12345.
+#'   \item \code{beta.start}: starting values for the Markov chain, either a scalar or vector
+#'   with length equal to the number of estimated coefficients. The default is \code{NA}, such
+#'   that the maximum likelihood estimates are used as the starting values.
+#' }
+#' Use the following parameters to specify the model's priors:
+#' \itemize{
+#'     \item \code{b0}: prior mean for the coefficients, either a numeric vector or a
+#'     scalar. If a scalar value, that value will be the prior mean for all the
+#'     coefficients. The default is 0.
+#'     \item \code{B0}: prior precision parameter for the coefficients, either a
+#'     square matrix (with the dimensions equal to the number of the coefficients) or
+#'     a scalar. If a scalar value, that value times an identity matrix will be the
+#'     prior precision parameter. The default is 0, which leads to an improper prior.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_oprobitbayes.html}
+#' @import methods
+#' @export Zelig-oprobit-bayes
+#' @exportClass Zelig-oprobit-bayes
+#'
+#' @include model-zelig.R
+#' @include model-bayes.R
+
+zoprobitbayes <- setRefClass("Zelig-oprobit-bayes",
+                            contains = c("Zelig-bayes"))
+
+zoprobitbayes$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "oprobit-bayes"
+    .self$year <- 2013
+    .self$category <- "discrete"
+    .self$authors <- "Ben Goodrich, Ying Lu"
+    .self$description = "Bayesian Probit Regression for Dichotomous Dependent Variables"
+    .self$fn <- quote(MCMCpack::MCMCoprobit)
+    # JSON from parent
+    .self$wrapper <- "oprobit.bayes"
+  }
+)
+
+zoprobitbayes$methods(
+  param = function(z.out) {
+    mysimparam <- callSuper(z.out)
+    # Produce the model matrix in order to get all terms (explicit and implicit)
+    # from the regression model.
+    mat <- model.matrix(.self$formula, data = .self$data)
+    # Response Terms
+    p <- ncol(mat)
+    # All coefficients
+    coefficients <- mysimparam
+    # Coefficients for predictor variables
+    beta <- coefficients[, 1:p]
+    # Middle values of "gamma" matrix
+    mid.gamma <- coefficients[, -(1:p)]
+    # Number of outcome categories, implied by the remaining cutpoint columns
+    level <- ncol(coefficients) - p + 2
+    # Initialize the "gamma" parameters
+    gamma <- matrix(NA, nrow(coefficients), level + 1)
+    # The first, second and last values are fixed
+    gamma[, 1] <- -Inf
+    gamma[, 2] <- 0
+    gamma[, ncol(gamma)] <- Inf
+    # All others are determined by the coef-matrix (now stored in mid.gamma)
+    if (ncol(gamma) > 3)
+      gamma[, 3:(ncol(gamma) - 1)] <- mid.gamma
+    # return
+    mysimparam <- list(simparam = beta, simalpha = gamma)
+    return(mysimparam)
+  }
+)
+
+zoprobitbayes$methods(
+  qi = function(simparam, mm) {
+    beta <- simparam$simparam
+    gamma <- simparam$simalpha
+    labels <- levels(model.response(model.frame(.self$formula, data = .self$data)))
+    # x is implicitly cast into a matrix
+    eta <- beta %*% t(mm)
+    # **TODO: Sort out sizes of matrices for these things.
+    ev <- array(NA, c(nrow(eta), ncol(gamma) - 1, ncol(eta)))
+    pv <- matrix(NA, nrow(eta), ncol(eta))
+    # Compute Expected Values
+    # ***********************
+    # Note that the inverse link function is:
+    #   pnorm(gamma[, j+1]-eta) - pnorm(gamma[, j]-eta)
+    for (j in 1:(ncol(gamma) - 1)) {
+      ev[, j, ] <- pnorm(gamma[, j + 1] - eta) - pnorm(gamma[, j] - eta)
+    }
+    colnames(ev) <- labels
+    # Compute Predicted Values
+    # ************************
+    for (j in 1:nrow(pv)) {
+      mu <- eta[j, ]
+      pv[j, ] <- as.character(cut(mu, gamma[j, ], labels = labels))
+    }
+    pv <- as.factor(pv)
+    # **TODO: Update summarize to work with at most 3-dimensional arrays
+    ev <- ev[, , 1]
+    return(list(ev = ev, pv = pv))
+  }
+)
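+
+# A small numeric illustration of the ordered-probit link used in qi() above:
+# category probabilities are differences of normal CDFs at the cutpoints
+# (the cutpoints and linear predictor below are made up for illustration):
+# gamma <- c(-Inf, 0, 1.2, Inf)   # cutpoints for a three-category outcome
+# eta   <- 0.4                    # linear predictor for one observation
+# probs <- pnorm(gamma[-1] - eta) - pnorm(gamma[-length(gamma)] - eta)
+# sum(probs)                      # equals 1
+# cut(eta, gamma, labels = c("low", "mid", "high"))   # predicted category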
diff --git a/R/model-oprobit.R b/R/model-oprobit.R
deleted file mode 100644
index ef07f7e..0000000
--- a/R/model-oprobit.R
+++ /dev/null
@@ -1,53 +0,0 @@
-#' Ordinal Probit Regression for Ordered Categorical Dependent Variables
-#'
-#' Vignette: \url{http://docs.zeligproject.org/articles/zeligchoice_oprobit.html}
-#' @import methods
-#' @export Zelig-ologit
-#' @exportClass Zelig-ologit
-#' 
-#' @include model-obinchoice.R
-
-zoprobit <- setRefClass("Zelig-oprobit",
-                       contains = "Zelig-obinchoice")
-
-zoprobit$methods(
-  initialize = function() {
-    callSuper()
-    .self$name <- "oprobit"
-    .self$packageauthors <- "William N. Venables, and Brian D. Ripley"
-    .self$description <- "Ordinal Probit Regression for Ordered Categorical Dependent Variables"
-    .self$method <- "probit"
-    .self$linkinv <- function(eta, zeta) {
-      tmp1 <- matrix(1, nrow = length(eta), ncol = length(zeta) + 1)
-      tmp1[, 1:length(zeta)] <- pnorm(zeta - eta)
-      return(tmp1)
-    }
-    .self$wrapper <- "oprobit"
-    .self$vignette.url <- "http://docs.zeligproject.org/articles/zeligchoice_oprobit.html"
-  }
-)
-
-
-zoprobit$methods(
-  mcfun = function(x, b0=0, b1=1, ..., sim=TRUE){
-    mu <- b0 + b1 * x
-    n.sim = length(x)
-    y.star <- rnorm(n = n.sim, mean = mu, sd = 1)  # latent continuous y
-    t <- c(0,1,2)  # vector of cutpoints dividing latent space into ordered outcomes
-    
-    if(sim){
-        y.obs <- rep(1, n.sim)
-        for(i in 1:length(t)){
-            y.obs <- y.obs + as.numeric(y.star > t[i]) # observed ordered outcome
-        }
-        return(as.factor(y.obs))
-    }else{
-        y.obs.hat <- rep(1, n.sim)
-        for(i in 1:length(t)){
-            y.obs.hat <- y.obs.hat + pnorm(q = t[i], mean = mu , sd = 1, lower.tail = FALSE) # expectation of observed ordered outcome
-        }
-        return(y.obs.hat)
-    }
-  }
-)
-
diff --git a/R/model-poisson-bayes.R b/R/model-poisson-bayes.R
new file mode 100644
index 0000000..e594cc6
--- /dev/null
+++ b/R/model-poisson-bayes.R
@@ -0,0 +1,117 @@
+#' Bayesian Poisson Regression
+#'
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#' @examples
+#' data(sanction)
+#' z.out <- zelig(num ~ target + coop, model = "poisson.bayes", data = sanction, verbose = FALSE)
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+#'   \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+#'   \item \code{tune}: Metropolis tuning parameter, either a positive scalar or a vector of length
+#'   k, where k is the number of coefficients. The tuning parameter should be set such that the
+#'   acceptance rate of the Metropolis algorithm is satisfactory (typically between 0.20 and 0.5).
+#'   The default value is 1.1.
+#'   \item \code{thin}: thinning interval for the Markov chain. Only every thin-th draw from the
+#'    Markov chain is kept. The value of mcmc must be divisible by this value. The default value is 1.
+#'   \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%) is
+#'   printed to the screen.
+#'   \item \code{seed}: seed for the random number generator. The default is \code{NA} which
+#'   corresponds to a random seed of 12345.
+#'   \item \code{beta.start}: starting values for the Markov chain, either a scalar or vector
+#'   with length equal to the number of estimated coefficients. The default is \code{NA}, such that the maximum likelihood estimates are used as the starting values.
+#' }
+#' Use the following parameters to specify the model's priors:
+#' \itemize{
+#'     \item \code{b0}: prior mean for the coefficients, either a numeric vector or a scalar.
+#'     If a scalar value, that value will be the prior mean for all the coefficients.
+#'     The default is 0.
+#'     \item \code{B0}: prior precision parameter for the coefficients, either a square matrix
+#'     (with the dimensions equal to the number of the coefficients) or a scalar.
+#'     If a scalar value, that value times an identity matrix will be the prior precision parameter.
+#'     The default is 0, which leads to an improper prior.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_poissonbayes.html}
+#' @import methods
+#' @export Zelig-poisson-bayes
+#' @exportClass Zelig-poisson-bayes
+#'
+#' @include model-zelig.R
+#' @include model-bayes.R
+#' @include model-poisson.R
+
+zpoissonbayes <- setRefClass("Zelig-poisson-bayes",
+                             contains = c("Zelig-bayes",
+                                          "Zelig-poisson"))
+
+zpoissonbayes$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "poisson-bayes"
+    .self$family <- "poisson"
+    .self$link <- "log"
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    .self$year <- 2013
+    .self$category <- "continuous"
+    .self$authors <- "Ben Goodrich, Ying Lu"
+    .self$description = "Bayesian Poisson Regression"
+    .self$fn <- quote(MCMCpack::MCMCpoisson)
+    # JSON from parent
+    .self$wrapper <- "poisson.bayes"
+  }
+)
+
+
+zpoissonbayes$methods(
+  mcfun = function(x, b0=0, b1=1, ..., sim=TRUE){
+    lambda <- exp(b0 + b1 * x)
+    if(sim){
+        y <- rpois(n=length(x), lambda=lambda)
+        return(y)
+    }else{
+        return(lambda)
+    }
+  }
+)
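+
+# A hedged sketch of passing the sampler and prior arguments documented
+# above through zelig(); the particular values are arbitrary illustrations:
+# z.out <- zelig(num ~ target + coop, model = "poisson.bayes",
+#                data = sanction, burnin = 2000, mcmc = 20000,
+#                b0 = 0, B0 = 0.1, verbose = FALSE)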
diff --git a/R/model-poisson-gee.R b/R/model-poisson-gee.R
new file mode 100755
index 0000000..2f652f9
--- /dev/null
+++ b/R/model-poisson-gee.R
@@ -0,0 +1,101 @@
+#' Generalized Estimating Equation for Poisson Regression
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'@param id a variable which identifies the clusters. The data should
+#'be sorted by id and should be ordered within each cluster when appropriate.
+#'@param corstr a character string specifying the correlation structure: "independence",
+#'"exchangeable", "ar1", "unstructured" or "userdefined".
+#'@param geeglm see geeglm in package geepack for other function arguments.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE}, don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#'
+#'@examples
+#' library(Zelig)
+#' data(sanction)
+#' sanction$cluster <- c(rep(c(1:15), 5), rep(c(16), 3))
+#' sorted.sanction <- sanction[order(sanction$cluster),]
+#' z.out <- zelig(num ~ target + coop, model = "poisson.gee", id = "cluster", data = sorted.sanction)
+#' summary(z.out)
+#'
+#'@seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_poissongee.html}
+#' @import methods
+#' @export Zelig-poisson-gee
+#' @exportClass Zelig-poisson-gee
+#'
+#' @include model-zelig.R
+#' @include model-gee.R
+#' @include model-poisson.R
+
+zpoissongee <- setRefClass("Zelig-poisson-gee",
+                           contains = c("Zelig-gee", "Zelig-poisson"))
+
+zpoissongee$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "poisson-gee"
+    .self$family <- "poisson"
+    .self$link <- "log"
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    .self$year <- 2011
+    .self$category <- "continuous"
+    .self$authors <- "Patrick Lam"
+    .self$description = "General Estimating Equation for Poisson Regression"
+    .self$fn <- quote(geepack::geeglm)
+    # JSON from parent
+    .self$wrapper <- "poisson.gee"
+  }
+)
+
+
+zpoissongee$methods(
+  param = function(z.out, method="mvn") {
+    simparam.local <- callSuper(z.out, method=method)
+    return(simparam.local$simparam) # no ancillary parameter
+  }
+)
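+
+# A sketch combining the id and corstr arguments documented above with the
+# sorted.sanction data from the example (the chosen corstr value is illustrative):
+# z.out <- zelig(num ~ target + coop, model = "poisson.gee", id = "cluster",
+#                corstr = "exchangeable", data = sorted.sanction)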
diff --git a/R/model-poisson-survey.R b/R/model-poisson-survey.R
new file mode 100755
index 0000000..fe6357b
--- /dev/null
+++ b/R/model-poisson-survey.R
@@ -0,0 +1,106 @@
+#' Poisson Regression with Survey Weights
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE}, don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @examples
+#' library(Zelig)
+#' data(api, package="survey")
+#' z.out1 <- zelig(enroll ~ api99 + yr.rnd, model = "poisson.survey", data = apistrat)
+#' summary(z.out1)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_poissonsurvey.html}
+#' @import methods
+#' @export Zelig-poisson-survey
+#' @exportClass Zelig-poisson-survey
+#'
+#' @include model-zelig.R
+#' @include model-survey.R
+#' @include model-poisson.R
+
+zpoissonsurvey <- setRefClass("Zelig-poisson-survey",
+                           contains = c("Zelig-survey", "Zelig-poisson"))
+
+zpoissonsurvey$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "poisson-survey"
+    .self$family <- "poisson"
+    .self$link <- "log"
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    .self$category <- "continuous"
+    .self$description = "Poisson Regression with Survey Weights"
+    # JSON from parent
+    .self$wrapper <- "poisson.survey"
+  }
+)
+
+zpoissonsurvey$methods(
+  qi = function(simparam, mm) {
+    eta <- simparam %*% t(mm)
+    theta.local <- matrix(.self$linkinv(eta), nrow = nrow(simparam))
+    ev <- theta.local
+    pv <- matrix(NA, nrow = nrow(theta.local), ncol = ncol(theta.local))
+    for (i in 1:ncol(theta.local))
+      pv[, i] <- rpois(nrow(theta.local), lambda = theta.local[, i])
+    return(list(ev = ev, pv = pv))
+  }
+)
+
+zpoissonsurvey$methods(
+  mcfun = function(x, b0=0, b1=1, ..., sim=TRUE){
+    lambda <- exp(b0 + b1 * x)
+    if(sim){
+        y <- rpois(n=length(x), lambda=lambda)
+        return(y)
+    }else{
+        return(lambda)
+    }
+  }
+)
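+
+# A detached sketch of the quantity-of-interest step in qi() above, showing
+# how simulated coefficients map to expected and predicted counts
+# (all values below are illustrative):
+# simparam <- MASS::mvrnorm(1000, mu = c(1, 0.5), Sigma = diag(0.01, 2))
+# mm  <- matrix(c(1, 2), nrow = 1)      # a single covariate profile
+# eta <- simparam %*% t(mm)
+# ev  <- exp(eta)                       # inverse of the log link
+# pv  <- rpois(length(ev), lambda = ev)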
diff --git a/R/model-poisson.R b/R/model-poisson.R
new file mode 100755
index 0000000..0959a0a
--- /dev/null
+++ b/R/model-poisson.R
@@ -0,0 +1,111 @@
+#' Poisson Regression for Event Count Dependent Variables
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'@param id a variable which identifies the clusters. The data should be sorted by id and should be ordered within each cluster when appropriate.
+#'@param corstr a character string specifying the correlation structure: "independence", "exchangeable", "ar1", "unstructured" or "userdefined".
+#'@param geeglm see geeglm in package geepack for other function arguments.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE}, don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @examples
+#' library(Zelig)
+#' data(sanction)
+#' z.out <- zelig(num ~ target + coop, model = "poisson", data = sanction)
+#' summary(z.out)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_poisson.html}
+#' @import methods
+#' @export Zelig-poisson
+#' @exportClass Zelig-poisson
+#'
+#' @include model-zelig.R
+#' @include model-glm.R
+zpoisson <- setRefClass("Zelig-poisson",
+                        contains = "Zelig-glm",
+                        fields = list(theta = "ANY"))
+
+zpoisson$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "poisson"
+    .self$family <- "poisson"
+    .self$link <- "log"
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    .self$authors <- "Kosuke Imai, Gary King, Olivia Lau"
+    .self$year <- 2007
+    .self$category <- "count"
+    .self$description <- "Poisson Regression for Event Count Dependent Variables"
+    # JSON
+    .self$outcome <- "discrete"
+    .self$wrapper <- "poisson"
+  }
+)
+
+zpoisson$methods(
+  qi = function(simparam, mm) {
+    eta <- simparam %*% t(mm)
+    theta.local <- matrix(.self$linkinv(eta), nrow = nrow(simparam))
+    ev <- theta.local
+    pv <- matrix(NA, nrow = nrow(theta.local), ncol = ncol(theta.local))
+    for (i in 1:ncol(theta.local))
+      pv[, i] <- rpois(nrow(theta.local), lambda = theta.local[, i])
+    return(list(ev = ev, pv = pv))
+  }
+)
+
+zpoisson$methods(
+  mcfun = function(x, b0=0, b1=1, ..., sim=TRUE){
+    lambda <- exp(b0 + b1 * x)
+    if(sim){
+        y <- rpois(n=length(x), lambda=lambda)
+        return(y)
+    }else{
+        return(lambda)
+    }
+  }
+)
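+
+# The data-generating process behind mcfun(), written out as a quick check
+# (illustrative only):
+# x      <- runif(1000)
+# lambda <- exp(0 + 1 * x)
+# y      <- rpois(length(x), lambda)
+# coef(glm(y ~ x, family = poisson()))   # should be close to c(0, 1)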
diff --git a/R/model-probit-bayes.R b/R/model-probit-bayes.R
new file mode 100644
index 0000000..3bf9d1c
--- /dev/null
+++ b/R/model-probit-bayes.R
@@ -0,0 +1,121 @@
+#' Bayesian Probit Regression
+#'
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. For example, to run the same model on all fifty states, you could
+#'   use: \code{z.out <- zelig(y ~ x1 + x2, data = mydata, model = 'ls',
+#'   by = 'state')} You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+#'   \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+#'   \item \code{thin}: thinning interval for the Markov chain. Only every thin-th draw from the
+#'   Markov chain is kept. The value of mcmc must be divisible by this value. The default value is 1.
+#'   \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%) is
+#'   printed to the screen.
+#'   \item \code{seed}: seed for the random number generator. The default is \code{NA} which
+#'   corresponds to a random seed of 12345.
+#'   \item \code{beta.start}: starting values for the Markov chain, either a scalar or vector with
+#'   length equal to the number of estimated coefficients. The default is \code{NA}, such that the
+#'   maximum likelihood estimates are used as the starting values.
+#' }
+#' Use the following parameters to specify the model's priors:
+#' \itemize{
+#'     \item \code{b0}: prior mean for the coefficients, either a numeric vector or a scalar.
+#'     If a scalar value, that value will be the prior mean for all the coefficients. The default is 0.
+#'     \item \code{B0}: prior precision parameter for the coefficients, either a square matrix (with
+#'     the dimensions equal to the number of the coefficients) or a scalar. If a scalar value, that
+#'     value times an identity matrix will be the prior precision parameter. The default is 0, which
+#'     leads to an improper prior.
+#' }
+#' Use the following arguments to specify optional output for the model:
+#' \itemize{
+#'     \item \code{bayes.resid}: defaults to FALSE. If TRUE, the latent Bayesian residuals for all
+#'     observations are returned. Alternatively, users can specify a vector of observations for
+#'     which the latent residuals should be returned.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#'
+#' @examples
+#' data(turnout)
+#' z.out <- zelig(vote ~ race + educate, model = "probit.bayes", data = turnout, verbose = FALSE)
+#' summary(z.out)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_probitbayes.html}
+#' @import methods
+#' @export Zelig-probit-bayes
+#' @exportClass Zelig-probit-bayes
+#'
+#' @include model-zelig.R
+#' @include model-probit.R
+
+zprobitbayes <- setRefClass("Zelig-probit-bayes",
+                             contains = c("Zelig-bayes",
+                                          "Zelig-probit"))
+
+zprobitbayes$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "probit-bayes"
+    .self$family <- "binomial"
+    .self$link <- "probit"
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    .self$year <- 2013
+    .self$category <- "dichotomous"
+    .self$authors <- "Ben Goodrich, Ying Lu"
+    .self$description = "Bayesian Probit Regression for Dichotomous Dependent Variables"
+    .self$fn <- quote(MCMCpack::MCMCprobit)
+    # JSON from parent
+    .self$wrapper <- "probit.bayes"
+  }
+)
+
+zprobitbayes$methods(
+  mcfun = function(x, b0=0, b1=1, ..., sim=TRUE){
+    mu <- pnorm(b0 + b1 * x)
+    if(sim){
+        y <- rbinom(n=length(x), size=1, prob=mu)
+        return(y)
+    }else{
+        return(mu)
+    }
+  }
+)
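+
+# The probit data-generating process encoded by mcfun(), as a standalone
+# check (illustrative only):
+# x  <- runif(1000)
+# mu <- pnorm(0 + 1 * x)
+# y  <- rbinom(length(x), size = 1, prob = mu)
+# coef(glm(y ~ x, family = binomial(link = "probit")))   # near c(0, 1)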
diff --git a/R/model-probit-gee.R b/R/model-probit-gee.R
new file mode 100755
index 0000000..d5fe21d
--- /dev/null
+++ b/R/model-probit-gee.R
@@ -0,0 +1,87 @@
+#' Generalized Estimating Equation for Probit Regression
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#' @param id a variable which identifies the clusters. The data should be
+#' sorted by id and should be ordered within each cluster when appropriate.
+#' @param corstr a character string specifying the correlation structure: "independence",
+#' "exchangeable", "ar1", "unstructured" or "userdefined".
+#' @param geeglm see geeglm in package geepack for other function arguments.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE}, don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#'@examples
+#' data(turnout)
+#' turnout$cluster <- rep(c(1:200), 10)
+#' sorted.turnout <- turnout[order(turnout$cluster),]
+#' z.out1 <- zelig(vote ~ race + educate, model = "probit.gee",
+#' id = "cluster", data = sorted.turnout)
+#' summary(z.out1)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_probitgee.html}
+#' @import methods
+#' @export Zelig-probit-gee
+#' @exportClass Zelig-probit-gee
+#'
+#' @include model-zelig.R
+#' @include model-binchoice-gee.R
+
+zprobitgee <- setRefClass("Zelig-probit-gee",
+                          contains = c("Zelig-binchoice-gee"))
+
+zprobitgee$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "probit-gee"
+    .self$link <- "probit"
+    .self$description <- "General Estimating Equation for Probit Regression"
+    .self$wrapper <- "probit.gee"
+  }
+)
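+
+# A hedged variant of the example above that also sets the corstr argument
+# documented in the parameters (the chosen structure is illustrative):
+# z.out2 <- zelig(vote ~ race + educate, model = "probit.gee",
+#                 id = "cluster", corstr = "ar1", data = sorted.turnout)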
diff --git a/R/model-probit-survey.R b/R/model-probit-survey.R
new file mode 100755
index 0000000..691afdf
--- /dev/null
+++ b/R/model-probit-survey.R
@@ -0,0 +1,121 @@
+#' Probit Regression with Survey Weights
+#'
+#'  @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#' @param below point at which the dependent variable is censored from below.
+#'     If the dependent variable is only censored from above, set \code{below = -Inf}.
+#'     The default value is 0.
+#' @param above point at which the dependent variable is censored from above.
+#'      If the dependent variable is only censored from below, set \code{above = Inf}.
+#'      The default value is \code{Inf}.
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item weights: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item burnin: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+#'   \item mcmc: number of the MCMC iterations after burnin (defaults to 10,000).
+#'   \item thin: thinning interval for the Markov chain. Only every thin-th
+#'   draw from the Markov chain is kept. The value of mcmc must be divisible by this value.
+#'   The default value is 1.
+#'   \item verbose: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%)
+#'   is printed to the screen.
+#'   \item seed: seed for the random number generator. The default is \code{NA} which
+#'   corresponds to a random seed of 12345.
+#'   \item beta.start: starting values for the Markov chain, either a scalar or
+#'   vector with length equal to the number of estimated coefficients. The default is
+#'   \code{NA}, such that the maximum likelihood estimates are used as the starting values.
+#' }
+#' Use the following parameters to specify the model's priors:
+#' \itemize{
+#'     \item b0: prior mean for the coefficients, either a numeric vector or a scalar.
+#'     If a scalar value, that value will be the prior mean for all the coefficients.
+#'     The default is 0.
+#'     \item B0: prior precision parameter for the coefficients, either a square matrix
+#'     (with the dimensions equal to the number of the coefficients) or a scalar.
+#'     If a scalar value, that value times an identity matrix will be the prior precision parameter.
+#'     The default is 0, which leads to an improper prior.
+#'     \item c0: c0/2 is the shape parameter for the Inverse Gamma prior on the variance of the
+#'     disturbance terms.
+#'     \item d0: d0/2 is the scale parameter for the Inverse Gamma prior on the variance of the
+#'     disturbance terms.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @examples
+#' data(api, package = "survey")
+#' apistrat$yr.rnd.numeric <- as.numeric(apistrat$yr.rnd == "Yes")
+#' z.out1 <- zelig(yr.rnd.numeric ~ meals + mobility,
+#' model = "probit.survey", weights = "pw", data = apistrat)
+#' summary(z.out1)
+#' x.low <- setx(z.out1, meals = quantile(apistrat$meals, 0.2))
+#' x.high <- setx(z.out1, meals = quantile(apistrat$meals, 0.8))
+#' s.out1 <- sim(z.out1, x = x.low, x1 = x.high)
+#' summary(s.out1)
+#' plot(s.out1)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_probitsurvey.html}
+#' @import methods
+#' @export Zelig-probit-survey
+#' @exportClass Zelig-probit-survey
+#'
+#' @include model-zelig.R
+#' @include model-binchoice-survey.R
+
+zprobitsurvey <- setRefClass("Zelig-probit-survey",
+                          contains = c("Zelig-binchoice-survey"))
+
+zprobitsurvey$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "probit-survey"
+    .self$link <- "probit"
+    .self$description <- "Probit Regression with Survey Weights"
+    .self$wrapper <- "probit.survey"
+  }
+)
+
+zprobitsurvey$methods(
+  mcfun = function(x, b0=0, b1=1, ..., sim=TRUE){
+    mu <- pnorm(b0 + b1 * x)
+    if(sim){
+        y <- rbinom(n=length(x), size=1, prob=mu)
+        return(y)
+    }else{
+        return(mu)
+    }
+  }
+)
diff --git a/R/model-probit.R b/R/model-probit.R
new file mode 100755
index 0000000..fc896fe
--- /dev/null
+++ b/R/model-probit.R
@@ -0,0 +1,85 @@
+#' Probit Regression for Dichotomous Dependent Variables
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#' @examples
+#' data(turnout)
+#' z.out <- zelig(vote ~ race + educate, model = "probit", data = turnout)
+#' summary(z.out)
+#' x.out <- setx(z.out)
+#' s.out <- sim(z.out, x = x.out)
+#' summary(s.out)
+#' plot(s.out)
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
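+#'
+#' For instance (using an illustrative number of replicates), bootstrap-based
+#' uncertainty estimates could be requested for the example above with
+#' \code{zelig(vote ~ race + educate, model = "probit", data = turnout,
+#' bootstrap = 100)}.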
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_probit.html}
+#' @import methods
+#' @export Zelig-probit
+#' @exportClass Zelig-probit
+#'
+#' @include model-zelig.R
+#' @include model-glm.R
+#' @include model-binchoice.R
+
+zprobit <- setRefClass("Zelig-probit",
+                       contains = "Zelig-binchoice")
+
+zprobit$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "probit"
+    .self$link <- "probit"
+    .self$description = "Probit Regression for Dichotomous Dependent Variables"
+    .self$packageauthors <- "R Core Team"
+    .self$wrapper <- "probit"
+  }
+)
+
+zprobit$methods(
+  mcfun = function(x, b0=0, b1=1, ..., sim=TRUE){
+    mu <- pnorm(b0 + b1 * x)
+    if(sim){
+        y <- rbinom(n=length(x), size=1, prob=mu)
+        return(y)
+    }else{
+        return(mu)
+    }
+  }
+)
diff --git a/R/model-quantile.R b/R/model-quantile.R
new file mode 100755
index 0000000..7ceb0d6
--- /dev/null
+++ b/R/model-quantile.R
@@ -0,0 +1,199 @@
+#' Quantile Regression for Continuous Dependent Variables
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' In addition to the standard inputs, \code{zelig} takes the following additional options
+#' for quantile regression:
+#' \itemize{
+#'     \item \code{tau}: defaults to 0.5. Specifies the conditional quantile(s) that will be
+#'     estimated. 0.5 corresponds to estimating the conditional median, 0.25 and 0.75 correspond
+#'     to the conditional quartiles, etc. tau vectors with length greater than 1 are not currently
+#'     supported. If tau is set outside of the interval [0,1], zelig returns the solution for all
+#'     possible conditional quantiles given the data, but does not support inference on this fit
+#'     (setx and sim will fail).
+#'     \item \code{se}: a string value that defaults to "nid". Specifies the method by which
+#'     the covariance matrix of coefficients is estimated during the sim stage of analysis. \code{se}
+#'     can take the following values, which are passed to the \code{summary.rq} function from the
+#'     \code{quantreg} package. These descriptions are copied from the \code{summary.rq} documentation.
+#'     \itemize{
+#'         \item \code{"iid"} which presumes that the errors are iid and computes an estimate of
+#'         the asymptotic covariance matrix as in KB(1978).
+#'         \item \code{"nid"} which presumes local (in tau) linearity (in x) of the the
+#'         conditional quantile functions and computes a Huber sandwich estimate using a local
+#'         estimate of the sparsity.
+#'         \item \code{"ker"} which uses a kernel estimate of the sandwich as proposed by Powell(1990).
+#'     }
+#'     \item \code{...}: additional options passed to rq when fitting the model. See documentation for rq in the quantreg package for more information.
+#' }
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
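+#'
+#' For instance (with hypothetical variable names), the lower conditional
+#' quartile could be requested with
+#' \code{zelig(y ~ x1 + x2, model = "rq", tau = 0.25, data = mydata)};
+#' note that only a single \code{tau} value is accepted per call.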
+#'
+#'
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @examples
+#' library(Zelig)
+#' data(stackloss)
+#' z.out1 <- zelig(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,
+#' model = "rq", data = stackloss,tau = 0.5)
+#' summary(z.out1)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_quantile.html}
+#' @import methods
+#' @export Zelig-quantile
+#' @exportClass Zelig-quantile
+#'
+#' @include model-zelig.R
+
+zquantile <- setRefClass("Zelig-quantile",
+                         contains = "Zelig",
+                         fields = list(tau = "ANY"
+                         ))
+
+zquantile$methods(
+  initialize = function() {
+    callSuper()
+    .self$fn <- quote(quantreg::rq)
+    .self$name <- "quantile"
+    .self$authors <- "Alexander D'Amour"
+    .self$packageauthors <- "Roger Koenker"
+    .self$modelauthors <- "Alexander D'Amour"
+    .self$year <- 2008
+    .self$category <- "continuous"
+    .self$description <- "Quantile Regression for Continuous Dependent Variables"
+    # JSON
+    .self$outcome <- "continuous"
+    .self$wrapper <- "rq"
+    .self$acceptweights <- TRUE
+  }
+)
+
+zquantile$methods(
+  zelig = function(formula, data, ..., weights = NULL, by = NULL,
+                   bootstrap = FALSE) {
+
+    # avoids CRAN warning about deep assignment from formula existing separately as argument and field
+    localBy <- by
+    # avoids CRAN warning about deep assignment from formula existing separately as argument and field
+    localData <- data
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- match.call(expand.dots = TRUE)
+
+    if (!is.null(.self$model.call$tau)) {
+        if (length(eval(.self$model.call$tau)) > 1) {
+            stop('tau argument only accepts a single value. To estimate multiple quantiles, fit separate models.',
+                    call. = FALSE)
+        } else
+            .self$tau <- eval(.self$model.call$tau)
+#        if (length(.self$tau) > 1) {
+#            localData <- bind_rows(lapply(eval(.self$tau),
+#                                      function(tau) cbind(tau, localData)))
+#          #  localBy <- cbind("tau", localBy)
+#        }
+    } else
+        .self$tau <- 0.5
+
+    callSuper(formula = formula, data = localData, ..., weights = weights,
+                by = localBy, bootstrap = bootstrap)
+
+    rq_summaries <- lapply(.self$zelig.out$z.out, (function(x)
+                            summary(x, se = "nid", cov = TRUE)))
+
+    if (length(rq_summaries) > 1) {
+        rse <- lapply(rq_summaries, function(y) y$cov)
+    }
+    else rse <- rq_summaries[[1]]$cov  # single model: take its covariance matrix directly
+#    rse <- lapply(.self$zelig.out$z.out, (function(x)
+#        quantreg::summary.rq(x, se = "nid", cov = TRUE)$cov))
+
+#    rse <- lapply(.self$zelig.out$z.out,
+#        (function(x) {
+#            full <- quantreg::summary.rq(x, se = "nid", cov = TRUE)$cov
+#        })
+#    )
+    .self$test.statistics<- list(robust.se = rse)
+})
+
+zquantile$methods(
+  param = function(z.out, method = "mvn") {
+    object <- z.out
+    if(identical(method,"mvn")){
+        rq.sum <- summary(object, cov = TRUE, se = object$se)
+        return(mvrnorm(n = .self$num, mu = object$coef, Sigma = rq.sum$cov))
+    } else if(identical(method,"point")){
+        return(t(as.matrix(object$coef)))
+    }
+})
+
+zquantile$methods(
+  qi = function(simparam, mm) {
+    object <- mm
+    coeff <- simparam
+    eps <- .Machine$double.eps^(2/3)
+    ev <- coeff %*% t(object)
+    pv <- ev
+    n <- nrow(.self$data)
+    h <- quantreg::bandwidth.rq(.self$tau, n) # estimate optimal bandwidth for sparsity
+    if (.self$tau + h > 1)
+      stop("tau + h > 1. Sparsity estimate failed. Please specify a tau closer to 0.5")
+    if (.self$tau - h < 0)
+      stop("tau - h < 0. Sparsity estimate failed. Please specify a tau closer to 0.5")
+    beta_high <- quantreg::rq(.self$formula, data = .self$data, tau = .self$tau + h)$coef
+    beta_low <- quantreg::rq(.self$formula, data = .self$data, tau = .self$tau - h)$coef
+    F_diff <- mm %*% (beta_high - beta_low)
+    if (any(F_diff <= 0))
+      warning(paste(sum(F_diff <= 0),
+                    "density estimates were non-positive. Predicted values will likely be non-sensical."))
+    # Includes machine error correction as per summary.rq for nid case
+    f <- pmax(0, (2 * h) / (F_diff - eps))
+    # Use asymptotic approximation of Q(tau|X,beta) distribution
+    for(ii in 1:nrow(ev))
+      # Asymptotic distribution as per Koenker 2005 _Quantile Regression_ p. 72
+      pv[ii, ] <- rnorm(length(ev[ii, ]), mean = ev[ii, ],
+                        sqrt((.self$tau * (1 - .self$tau))) / (f * sqrt(n)))
+    return(list(ev  = ev, pv = pv))
+  }
+)
diff --git a/R/model-relogit.R b/R/model-relogit.R
new file mode 100755
index 0000000..36d486d
--- /dev/null
+++ b/R/model-relogit.R
@@ -0,0 +1,358 @@
+#' Rare Events Logistic Regression for Dichotomous Dependent Variables
+#'@param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#'@param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#'@param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#'@param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#'@param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#'@param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' The relogit procedure supports four optional arguments in addition to the
+#' standard arguments for zelig(). You may additionally use:
+#' \itemize{
+#'     \item \code{tau}: a vector containing either one or two values for \code{tau},
+#'     the true population fraction of ones. Use, for example, tau = c(0.05, 0.1) to specify
+#'     that the lower bound on tau is 0.05 and the upper bound is 0.1. If left unspecified, only
+#'     finite-sample bias correction is performed, not case-control correction.
+#'     \item \code{case.control}: if tau is specified, choose a method to correct for case-control
+#'     sampling design: "prior" (default) or "weighting".
+#'     \item \code{bias.correct}: a logical value of \code{TRUE} (default) or \code{FALSE}
+#'     indicating whether the intercept should be corrected for finite sample (rare events) bias.
+#' }
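+#'
+#' For instance (with hypothetical variable names and an illustrative
+#' population fraction), the weighting correction could be requested with
+#' \code{zelig(y ~ x1 + x2, model = "relogit", tau = 0.05,
+#' case.control = "weighting", bias.correct = TRUE, data = mydata)}.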
+#'
+#' Additional parameters available to many models include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @examples
+#' library(Zelig)
+#' data(mid)
+#' z.out1 <- zelig(conflict ~ major + contig + power + maxdem + mindem + years,
+#'               data = mid, model = "relogit", tau = 1042/303772)
+#' summary(z.out1)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_relogit.html}
+#' @import methods
+#' @export Zelig-relogit
+#' @exportClass Zelig-relogit
+#'
+#' @include model-zelig.R
+#' @include model-glm.R
+#' @include model-binchoice.R
+#' @include model-logit.R
+
+zrelogit <- setRefClass("Zelig-relogit",
+                      contains = "Zelig",
+                      fields = list(family = "character",
+                                    link = "character",
+                                    linkinv = "function"))
+
+zrelogit$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "relogit"
+    .self$description <- "Rare Events Logistic Regression for Dichotomous Dependent Variables"
+    .self$fn <- quote(relogit)
+    .self$family <- "binomial"
+    .self$link <- "logit"
+    .self$wrapper <- "relogit"
+    ref1 <- bibentry(
+            bibtype="Article",
+            title = "Logistic Regression in Rare Events Data",
+            author = c(
+                person("Gary", "King"),
+                person("Langche", "Zeng")
+                ),
+            journal = "Political Analysis",
+            volume = 9,
+            number = 2,
+            year = 2001,
+            pages = "137--163")
+    ref2 <- bibentry(
+            bibtype="Article",
+            title = "Explaining Rare Events in International Relations",
+            author = c(
+                person("Gary", "King"),
+                person("Langche", "Zeng")
+                ),
+            journal = "International Organization",
+            volume = 55,
+            number = 3,
+            year = 2001,
+            pages = "693--715")
+    .self$refs<-c(.self$refs,ref1,ref2)
+  }
+)
+
+zrelogit$methods(
+    show = function(odds_ratios = FALSE, ...) {
+    if (.self$robust.se) {
+        if (!.self$mi & !.self$bootstrap) {
+            # Replace standard errors with robust standard errors
+            cat("Model: \n")
+            f5 <- .self$copy()
+            obj <- f5$from_zelig_model()
+            summ <- summary(obj)
+            robust_model <- lmtest::coeftest(obj,
+                                vcov = sandwich::vcovHC(obj, "HC1"))
+            summ$coefficients[, c(2:4)] <- robust_model[, c(2:4)]
+            if (odds_ratios) {
+                summ <- or_summary(summ, label_mod_se = "(OR, robust)")
+            }
+            else
+                colnames(summ$coefficients)[2] <-
+                    paste(colnames(summ$coefficients)[2], "(robust)")
+            print(summ)
+        }
+        else if (.self$mi || .self$bootstrap)
+            stop("Weighted case control correction results are not currently available for multiply imputed or bootstrapped data.",
+                call. = FALSE)
+    }
+    else if (!.self$robust.se & odds_ratios & !.self$mi & !.self$bootstrap) {
+        cat("Model: \n")
+        f5 <- .self$copy()
+        obj <- f5$from_zelig_model()
+        summ <- summary(obj)
+        summ <- or_summary(summ)
+        print(summ)
+    }
+    else {
+        callSuper(...)
+    }
+        #print(base::summary(.self$zelig.out))
+    }
+)
+
+zrelogit$methods(
+  zelig = function(formula, ..., tau = NULL, bias.correct = NULL,
+                   case.control = NULL, data, by = NULL, bootstrap = FALSE) {
+     if (!is.null(tau)) {
+         if (any(tau <= 0))
+             stop("tau is the population proportion of 1's for the response variable.\nIt must be > 0.",
+                  call. = FALSE)
+     }
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- .self$zelig.call
+    # Catch NULL case.control
+    if (is.null(case.control))
+        case.control <- "prior"
+    if (case.control == "weighting") # See GitHub issue #295
+        .self$robust.se <- TRUE
+    else if (length(.self$robust.se) == 0)
+        .self$robust.se <- FALSE
+    # Catch NULL bias.correct
+    if (is.null(bias.correct))
+        bias.correct = TRUE
+    # Construct formula. Relogit models have the structure:
+    #   cbind(y, 1-y) ~ x1 + x2 + x3 + ... + xN
+    # Where y is the response.
+#    form <- update(formula, cbind(., 1 - .) ~ .)
+#    .self$model.call$formula <- form
+    .self$model.call$case.control <- case.control
+    .self$model.call$bias.correct <- bias.correct
+    .self$model.call$tau <- tau
+    callSuper(formula = formula, data = data, ..., weights = NULL, by = by,
+              bootstrap = bootstrap)
+  }
+)
+
+zrelogit$methods(
+    modcall_formula_transformer = function() {
+        "Transform model call formula."
+
+        # Construct formula. Relogit models have the structure:
+        #   cbind(y, 1-y) ~ x1 + x2 + x3 + ... + xN
+        # Where y is the response.
+        relogit_form <- update(.self$formula, cbind(., 1 - .) ~ .)
+        .self$model.call$formula <- relogit_form
+    }
+)
+
+zrelogit$methods(
+  qi = function(simparam, mm) {
+    .self$linkinv <- eval(call(.self$family, .self$link))$linkinv
+    coeff <- simparam
+    eta <- simparam %*% t(mm)
+    eta <- Filter(function (y) !is.na(y), eta)
+    theta <- matrix(.self$linkinv(eta), nrow = nrow(coeff))
+    ev <- matrix(.self$linkinv(eta), ncol = ncol(theta))
+    pv <- matrix(nrow = nrow(ev), ncol = ncol(ev))
+    for (j in 1:ncol(ev))
+      pv[, j] <- rbinom(length(ev[, j]), 1, prob = ev[, j])
+    levels(pv) <- c(0, 1)
+    return(list(ev = ev, pv = pv))
+  }
+)
+
+
+#' Estimation function for rare events logit models
+#'
+#' @details This is intended as an internal function. Regular users should
+#' use \code{zelig} with \code{model = "relogit"}.
+#'
+#' @keywords internal
+
+relogit <- function(formula,
+                    data = sys.parent(),
+                    tau = NULL,
+                    bias.correct = TRUE,
+                    case.control = "prior",
+                    ...){
+  mf <- match.call()
+  mf$tau <- mf$bias.correct <- mf$case.control <- NULL
+  if (!is.null(tau)) {
+    tau <- unique(tau)
+    if (length(case.control) > 1)
+      stop("You can only choose one option for case control correction.")
+    ck1 <- grep("p", case.control)
+    ck2 <- grep("w", case.control)
+    if (length(ck1) == 0 & length(ck2) == 0)
+      stop("choose either case.control = \"prior\" ",
+           "or case.control = \"weighting\"")
+    if (length(ck2) == 0)
+      weighting <- FALSE
+    else
+      weighting <- TRUE
+  }
+  else
+    weighting <- FALSE
+  if (length(tau) >= 2) {
+    stop("tau must be a vector of length less than or equal to 1. For multiple taus, estimate models individually.")
+#  else if (length(tau) == 2) {
+
+# The following is not currently supported due to issue with summary
+#    mf[[1]] <- relogit
+#    res <- list()
+#    mf$tau <- min(tau)
+#    res$lower.estimate <- eval(as.call(mf), parent.frame())
+#    mf$tau <- max(tau)
+#    res$upper.estimate <- eval(as.call(mf), parent.frame())
+#    res$formula <- formula
+#    class(res) <- c("Relogit2", "Relogit")
+#    return(res)
+  }
+  else {
+    mf[[1]] <- glm
+    mf$family <- binomial(link = "logit")
+
+    y2 <- model.response(model.frame(mf$formula, data))
+    if (is.matrix(y2))
+      y <- y2[,1]
+    else
+      y <- y2
+    ybar <- mean(y)
+    if (weighting) {
+      w1 <- tau / ybar
+      w0 <- (1-tau) / (1-ybar)
+      wi <- w1 * y + w0 * (1 - y)
+      mf$weights <- wi
+    }
+    res <- eval(as.call(mf), parent.frame())
+    res$call <- match.call(expand.dots = TRUE)
+    res$tau <- tau
+    X <- model.matrix(res)
+    ## bias correction
+    if (bias.correct){
+      pihat <- fitted(res)
+      if (is.null(tau)) # w_i = 1
+        wi <- rep(1, length(y))
+      else if (weighting)
+        res$weighting <- TRUE
+      else {
+        w1 <- tau/ybar
+        w0 <- (1 - tau) / (1 - ybar)
+        wi <- w1 * y + w0 * (1 - y)
+        res$weighting <- FALSE
+      }
+      W <- pihat * (1 - pihat) * wi
+      ##Qdiag <- diag(X%*%solve(t(X)%*%diag(W)%*%X)%*%t(X))
+      Qdiag <- lm.influence(lm(y ~ X - 1, weights = W), do.coef = FALSE)$hat / W
+      if (is.null(tau)) # w_1=1 since tau=ybar
+        xi <- 0.5 * Qdiag * (2 * pihat - 1)
+      else
+        xi <- 0.5 * Qdiag * ((1 + w1) * pihat - w1) # returns ISQ (2001, eq. 11)
+        ## xi <- 0.5 * Qdiag * ((1 + w0) * pihat - w0)
+      res$coefficients <- res$coefficients -
+        lm(xi ~ X - 1, weights = W)$coefficients
+      res$bias.correct <- TRUE
+    }
+    else
+      res$bias.correct <- FALSE
+    ## prior correction
+    if (!is.null(tau) & !weighting){
+      if (tau <= 0 || tau >= 1)
+        stop("\ntau needs to be between 0 and 1.\n")
+      res$coefficients["(Intercept)"] <- res$coefficients["(Intercept)"] -
+        log(((1 - tau) / tau) * (ybar / (1 - ybar)))
+      res$prior.correct <- TRUE
+      res$weighting <- FALSE
+    }
+    else
+      res$prior.correct <- FALSE
+    if (is.null(res$weighting))
+      res$weighting <- FALSE
+
+    res$linear.predictors <- t(res$coefficients) %*% t(X)
+    res$fitted.values <- 1 / (1 + exp(-res$linear.predictors))
+    res$zelig <- "Relogit"
+    class(res) <- c("Relogit", "glm", "lm")
+    return(res)
+  }
+}
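+
+# Worked illustration of the prior correction above (hypothetical numbers):
+# with tau = 0.05 and an in-sample mean ybar = 0.20, the intercept is shifted by
+# -log(((1 - 0.05) / 0.05) * (0.20 / (1 - 0.20))) = -log(4.75), about -1.558.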
+
+zrelogit$methods(mcfun = function(x, b0 = 0, b1 = 1, alpha, mc.seed=123, keepall=FALSE, ..., sim = TRUE) {
+    set.seed(mc.seed)
+    mu <- 1/(1 + exp(-b0 - b1 * x))
+
+    y <- rbinom(n = length(x), size = 1, prob = mu)
+    if(keepall){
+      flag <- rep(TRUE, length(x))
+    }else{
+      select <- runif(length(x)) < alpha
+      flag <- ((y==0) & (select)) | (y==1)
+    }
+
+    if (sim) {
+        return(data.frame(y.sim=y[flag], x.sim=x[flag]))
+    } else {
+        return(data.frame(y.hat=mu[flag], x.seq=x[flag]))
+    }
+})
diff --git a/R/model-survey.R b/R/model-survey.R
new file mode 100755
index 0000000..63edb12
--- /dev/null
+++ b/R/model-survey.R
@@ -0,0 +1,102 @@
+#' Survey-weighted models in Zelig for complex sampling designs
+#'
+#' @import methods
+#' @export Zelig-survey
+#' @exportClass Zelig-survey
+#'
+#' @include model-zelig.R
+zsurvey <- setRefClass("Zelig-survey", contains = "Zelig")
+
+zsurvey$methods(initialize = function() {
+    callSuper()
+    .self$fn <- quote(survey::svyglm)
+    .self$packageauthors <- "Thomas Lumley"
+    .self$modelauthors <- "Nicholas Carnes"
+    .self$acceptweights <- TRUE
+})
+
+zsurvey$methods(zelig = function(formula, data, ids = ~1, probs = NULL,
+                                strata = NULL, fpc = NULL, nest = FALSE,
+                                check.strata = !nest, repweights = NULL,
+                                type = NULL, combined.weights = FALSE,
+                                rho = NULL, bootstrap.average = NULL,
+                                scale = NULL, rscales = NULL,
+                                fpctype = "fraction", ..., weights = NULL,
+                                by = NULL, bootstrap = FALSE) {
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+
+    warning("Not all features are available in Zelig Survey.\nConsider using surveyglm and setx directly.\nFor details see: <http://docs.zeligproject.org/articles/to_zelig.html>.",
+            call. = FALSE)
+
+    recastString2Formula <- function(a) {
+        if (is.character(a)) {
+            a <- as.Formula(paste("~", a))
+        }
+        return(a)
+    }
+
+    extract_vector <- function(x, df = data) {
+        if ("formula" %in% class(x))
+            x <- as.character(x)[[2]]
+        if (is.character(x))
+            if (x %in% names(df))
+                x <- df[, x]
+        return(x)
+    }
+
+    checkLogical <- function(a, name = "") {
+        if (!("logical" %in% class(a))) {
+            cat(paste("Warning: argument ", name, " is a logical and should be set to TRUE for FALSE.",
+                sep = ""))
+            return(FALSE)
+        } else {
+            return(TRUE)
+        }
+
+    }
+
+
+    localWeights <- weights # avoids CRAN warning about deep assignment from treatment existing separately as argument and field
+
+    ## Check arguments:
+
+    ## Zelig generally accepts formula names of variables present in dataset, but survey
+    ## package looks for formula expressions or data frames, so make conversion of any
+    ## character arguments.
+    ids <- recastString2Formula(ids)
+    probs <- recastString2Formula(probs)
+    # Convert to vector from data frame as formula expression for weights was
+    # not being passed
+    localWeights <- extract_vector(localWeights)
+    #localWeights <- recastString2Formula(localWeights)
+    strata <- recastString2Formula(strata)
+    fpc <- recastString2Formula(fpc)
+    checkforerror <- checkLogical(nest, "nest")
+    checkforerror <- checkLogical(check.strata, "check.strata")
+    repweights <- recastString2Formula(repweights)
+    # type should be a string
+    checkforerror <- checkLogical(combined.weights, "combined.weights")
+    # rho is the shrinkage factor; scale and rscales are scaling constants
+
+    if (is.null(repweights)) {
+        design <- survey::svydesign(data = data, ids = ids, probs = probs,
+                                    strata = strata, fpc = fpc, nest = nest,
+                                    check.strata = check.strata,
+                                    weights = localWeights)
+    } else {
+        design <- survey::svrepdesign(data = data, repweights = repweights,
+                                        type = type, weights = localWeights,
+                                        combined.weights = combined.weights,
+                                        rho = rho,
+                                        bootstrap.average = bootstrap.average,
+            scale = scale, rscales = rscales, fpctype = fpctype, fpc = fpc)
+    }
+
+    .self$model.call <- as.call(list(.self$fn,
+                                formula = .self$zelig.call$formula,
+                                design = design))  # fn will be set again by super, but initialized here for clarity
+    .self$model.call$family <- call(.self$family, .self$link)
+
+    callSuper(formula = formula, data = data, weights = localWeights, ...,
+              by = by, bootstrap = bootstrap)
+})
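+
+# Sketch of the design object constructed above, using the canonical example
+# from the survey package (stratified sample of California schools); shown for
+# illustration only:
+# data(api, package = "survey")
+# dstrat <- survey::svydesign(ids = ~1, strata = ~stype, weights = ~pw,
+#                             fpc = ~fpc, data = apistrat)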
diff --git a/R/model-timeseries.R b/R/model-timeseries.R
new file mode 100755
index 0000000..36485d4
--- /dev/null
+++ b/R/model-timeseries.R
@@ -0,0 +1,208 @@
+#' Time-series models in Zelig
+#'
+#' @import methods
+#' @export Zelig-timeseries
+#' @exportClass Zelig-timeseries
+#'
+#' @include model-zelig.R
+ztimeseries <- setRefClass("Zelig-timeseries",
+                    contains = "Zelig",
+                    fields = list(link = "character",
+                                  linkinv = "function"))
+
+
+ztimeseries$methods(
+  initialize = function() {
+    callSuper()
+    .self$packageauthors <- "R Core Team"
+    .self$modelauthors <- "James Honaker"
+    .self$acceptweights <- FALSE  #  Need to deal with block bootstrap
+    .self$category <- "timeseries"
+    .self$setx.labels <- list(ev  = "Expected Values: E(Y|X)",
+                              ev1 = "Expected Values: E(Y|X1)",
+                              pv  = "Predicted Values: Y|X",
+                              pv1 = "Predicted Values: Y|X1",
+                              fd  = "First Differences: E(Y|X1) - E(Y|X)",
+                              acf = "Autocorrelation Function",
+                              ev.shortrun = "Expected Values Immediately Resulting from Shock",
+                              ev.longrun = "Long Run Expected Values after Innovation",
+                              pv.shortrun = "Predicted Values Immediately Resulting from Shock",
+                              pv.longrun = "Long Run Predicted Values after Innovation",
+                              evseries.shock = "Expected Values Over Time from Shock",
+                              evseries.innovation ="Expected Values Over Time from Innovation",
+                              pvseries.shock = "Predicted Values Over Time from Shock",
+                              pvseries.innovation ="Predicted Values Over Time from Innovation")
+    warning("++++ All Zelig time series models are deprecated ++++",
+            call. = FALSE)
+  }
+)
+
+ztimeseries$methods(
+  zelig = function(formula, data, order = c(1, 0, 0), ts = NULL, cs = NULL, ...,
+                   weights = NULL, by = NULL, bootstrap = FALSE){
+
+    localBy <- by     # avoids CRAN warning about deep assignment from by existing separately as argument and field
+
+    if (identical(class(data), "function"))
+        stop("data not found.", call. = FALSE)
+    else
+        localData <- data # avoids CRAN warning about deep assignment from data existing separately as argument and field
+
+    if(!identical(bootstrap, FALSE)){
+         stop("Error: The bootstrap is not implemented for time-series models",
+              call. = FALSE)
+    }
+    if (!is.null(cs) && is.null(ts))
+        stop("ts must be specified if cs is specified.", call. = FALSE)
+    if (!is.null(cs) && !is.null(by)) {
+            stop("cs and by are equivalent for this model. Only one needs to be specified.",
+                 call. = FALSE)
+    }
+
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    if(identical(.self$name,"ar")){
+      order <- c(1,0,0)
+      .self$zelig.call$order <- order
+    } else if(identical(.self$name,"ma")){
+      order <- c(0,0,1)
+      .self$zelig.call$order <- order
+    } else {
+        dots <- list(...)
+        if (!is.null(dots$order)) {
+            order <- dots$order
+        }
+        .self$zelig.call$order <- order
+    }
+    .self$model.call <- .self$zelig.call
+
+    ## Sort dataset by time and cross-section
+    ## Should add checks that ts and cs are valid, and consider how they interact with by.
+    ## This follows handling from Amelia::prep.r, which also has code to deal with lags, in case we add those later.
+    if(!is.null(ts)){
+      .self$model.call$ts <- NULL
+      if (!is.null(cs)) {
+        .self$model.call$cs <- NULL
+        tsarg<-list(localData[,cs],localData[,ts])
+        localBy <- cs  # Use by architecture to deal with cross-sections in time-series models that do not support such.  Currently overrides.
+      } else {
+        tsarg<-list(localData[,ts])
+      }
+
+      tssort <- do.call("order",tsarg)
+      localData <- localData[tssort,]
+    }
+
+    ## ts and cs are used to reorganize dataset, and do not get further passed on to Super
+    callSuper(formula = formula, data = localData, order=order, ...,
+              weights = weights, by = localBy, bootstrap = FALSE)
+  }
+)
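+
+# Illustrative sketch of the ts/cs reordering above (hypothetical column names):
+# with ts = "year" and cs = "country", the data are sorted by country and then
+# by year before estimation, roughly:
+# df <- df[do.call("order", list(df$country, df$year)), ]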
+
+# replace packagename method as stats::arima() has a second layer of wrapping in zeligArimaWrapper().
+
+ztimeseries$methods(
+  packagename = function() {
+    "Automatically retrieve wrapped package name"
+    return("stats")
+  }
+)
+
+
+# replace simx method to add ACF as QI.
+
+ztimeseries$methods(
+  simx = function() {
+    base_vals <- .self$set() # generate mm of all averages
+
+    d <- zelig_mutate(.self$zelig.out, simparam = .self$simparam$simparam)
+    d <- zelig_mutate(d, mm = base_vals$mm)
+    d <- zelig_mutate(d, mm1 = .self$setx.out$x$mm)
+
+
+    .self$sim.out$x <-  d %>%
+        do(qi = .self$qi(.$simparam, .$mm, .$mm1)) %>%
+        do(acf = .$qi$acf,
+           ev = .$qi$ev,
+           pv = .$qi$pv,
+           ev.shortrun = .$qi$ev.shortrun,
+           pv.shortrun = .$qi$pv.shortrun,
+           ev.longrun = .$qi$ev.longrun,
+           pv.longrun = .$qi$pv.longrun,
+           pvseries.shock = .$qi$pvseries.shock,
+           evseries.shock = .$qi$evseries.shock,
+           pvseries.innovation = .$qi$pvseries.innovation,
+           evseries.innovation = .$qi$evseries.innovation)
+
+    d <- zelig_mutate(.self$sim.out$x, ev0 = .self$sim.out$x$ev)    # Eventually, when ev moves, then this path for ev0 changes.  (Or make movement happen after fd calculation.)
+    d <- d %>%
+        do(fd = .$ev.longrun - .$ev0)
+    .self$sim.out$x <- zelig_mutate(.self$sim.out$x, fd = d$fd) #JH
+  }
+)
+
+ztimeseries$methods(
+  simx1 = function() {
+    d <- zelig_mutate(.self$zelig.out, simparam = .self$simparam$simparam)
+    d <- zelig_mutate(d, mm = .self$setx.out$x$mm)
+    d <- zelig_mutate(d, mm1 = .self$setx.out$x1$mm)
+
+#      return(list(acf = acf, ev = ev, pv = pv, pv.shortrun=pv.shortrun, pv.longrun=pv.longrun, ev.shortrun=ev.shortrun, ev.longrun=ev.longrun,
+#                pvseries.shock=yseries$y.shock, pvseries.innovation=yseries$y.innovation,
+#                evseries.shock=yseries$ev.shock, evseries.innovation=yseries$ev.innovation))
+
+    .self$sim.out$x1 <-  d %>%
+      do(qi = .self$qi(.$simparam, .$mm, .$mm1)) %>%
+      do(acf = .$qi$acf,
+         ev = .$qi$ev,
+         pv = .$qi$pv,
+         ev.shortrun = .$qi$ev.shortrun,
+         pv.shortrun = .$qi$pv.shortrun,
+         ev.longrun = .$qi$ev.longrun,
+         pv.longrun = .$qi$pv.longrun,
+         pvseries.shock = .$qi$pvseries.shock,
+         evseries.shock = .$qi$evseries.shock,
+         pvseries.innovation = .$qi$pvseries.innovation,
+         evseries.innovation = .$qi$evseries.innovation)
+      # Will eventually have to then move acf, ev, and pv from .self$setx.out$x1 to .self$setx.out$x
+      # This will also affect the next line:
+
+    d <- zelig_mutate(.self$sim.out$x1, ev0 = .self$sim.out$x1$ev)    # Eventually, when ev moves, then this path for ev0 changes.  (Or make movement happen after fd calculation.)
+    d <- d %>%
+      do(fd = .$ev.longrun - .$ev0)
+    .self$sim.out$x1 <- zelig_mutate(.self$sim.out$x1, fd = d$fd) #JH
+  }
+)
+
+# replace sim method to skip {simx, simx1, simrange, simrange1} methods as they are not separable
+# instead go directly to qi method
+
+ztimeseries$methods(
+  sim = function(num = 1000) {
+    "Timeseries Method for Computing and Organizing Simulated Quantities of Interest"
+    if (length(.self$num) == 0)
+      .self$num <- num
+    .self$simparam <- .self$zelig.out %>%
+      do(simparam = .self$param(.$z.out))
+
+    # NOTE difference here from standard Zelig approach.
+    # Normally these are done in sequence, but now we do one or the other.
+    if (.self$bsetx1) {
+      .self$simx1()
+    } else {
+      .self$simx()
+    }
+  }
+)
+
+# There is no fitting summary function for objects of class Arima.
+# So this passes the object through to print, and z$summary() is essentially print(summary(x)).
+
+#' Summary of an object of class Arima
+#' @method summary Arima
+#' @param object An object of class Arima
+#' @param ... Additional parameters
+#' @return The original object
+#' @export
+
+
+summary.Arima = function(object, ...) object
diff --git a/R/model-tobit-bayes.R b/R/model-tobit-bayes.R
new file mode 100644
index 0000000..d8e18b5
--- /dev/null
+++ b/R/model-tobit-bayes.R
@@ -0,0 +1,138 @@
+#' Bayesian Tobit Regression
+#'
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#' @param below point at which the dependent variable is censored from below.
+#'     If the dependent variable is only censored from above, set \code{below = -Inf}.
+#'     The default value is 0.
+#' @param above point at which the dependent variable is censored from above.
+#'      If the dependent variable is only censored from below, set \code{above = Inf}.
+#'      The default value is \code{Inf}.
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+#'   \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+#'   \item \code{thin}: thinning interval for the Markov chain. Only every thin-th
+#'   draw from the Markov chain is kept. The value of mcmc must be divisible by this value.
+#'   The default value is 1.
+#'   \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%)
+#'   is printed to the screen.
+#'   \item \code{seed}: seed for the random number generator. The default is \code{NA} which
+#'   corresponds to a random seed of 12345.
+#'   \item \code{beta.start}: starting values for the Markov chain, either a scalar or
+#'   vector with length equal to the number of estimated coefficients. The default is
+#'   \code{NA}, such that the maximum likelihood estimates are used as the starting values.
+#' }
+#' Use the following parameters to specify the model's priors:
+#' \itemize{
+#'     \item \code{b0}: prior mean for the coefficients, either a numeric vector or a scalar.
+#'     If a scalar value, that value will be the prior mean for all the coefficients.
+#'     The default is 0.
+#'     \item \code{B0}: prior precision parameter for the coefficients, either a square matrix
+#'     (with the dimensions equal to the number of the coefficients) or a scalar.
+#'     If a scalar value, that value times an identity matrix will be the prior precision parameter.
+#'     The default is 0, which leads to an improper prior.
+#'     \item \code{c0}: \code{c0}/2 is the shape parameter for the Inverse Gamma prior on the variance of the
+#'     disturbance terms.
+#'     \item \code{d0}: \code{d0}/2 is the scale parameter for the Inverse Gamma prior on the variance of the
+#'     disturbance terms.
+#' }
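+#'
+#' For instance (with hypothetical variable names and illustrative prior
+#' values), informative priors could be supplied with
+#' \code{zelig(y ~ x1 + x2, model = "tobit.bayes", data = mydata,
+#' b0 = 0, B0 = 0.1, c0 = 2, d0 = 2)}.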
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @examples
+#' data(turnout)
+#' z.out <- zelig(vote ~ race + educate, model = "tobit.bayes",data = turnout, verbose = FALSE)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_tobitbayes.html}
+#' @import methods
+#' @export Zelig-tobit-bayes
+#' @exportClass Zelig-tobit-bayes
+#'
+#' @include model-zelig.R
+#' @include model-bayes.R
+#' @include model-tobit.R
+
+ztobitbayes <- setRefClass("Zelig-tobit-bayes",
+                           contains = c("Zelig-bayes",
+                                        "Zelig-tobit"))
+
+ztobitbayes$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "tobit-bayes"
+    .self$year <- 2013
+    .self$category <- "dichotomous"
+    .self$authors <- "Ben Goodrich, Ying Lu"
+    .self$description = "Bayesian Tobit Regression for a Censored Dependent Variable"
+    .self$fn <- quote(MCMCpack::MCMCtobit)
+    # JSON from parent
+    .self$wrapper <- "tobit.bayes"
+  }
+)
+
+ztobitbayes$methods(
+  param = function(z.out) {
+    if (length(.self$below) == 0)
+      .self$below <- 0
+    if (length(.self$above) == 0)
+      .self$above <- Inf
+    simparam.local <- list()
+    simparam.local$simparam <- z.out[, 1:(ncol(z.out) - 1)]
+    simparam.local$simalpha <- sqrt(z.out[, ncol(z.out)])
+    return(simparam.local)
+  }
+)
+
+ztobitbayes$methods(
+  mcfun = function(x, b0=0, b1=1, alpha=1, sim=TRUE){
+    mu <- b0 + b1 * x
+    ystar <- rnorm(n=length(x), mean=mu, sd=alpha)
+    if(sim){
+        y <- (ystar>0) * ystar  # censoring from below at zero
+        return(y)
+    }else{
+        y.uncensored.hat.tobit<- mu + dnorm(mu, mean=0, sd=alpha)/pnorm(mu, mean=0, sd=alpha)
+        y.hat.tobit<- y.uncensored.hat.tobit * (1- pnorm(0, mean=mu, sd=alpha) )  # expected value of censored outcome
+        return(y.hat.tobit)
+    }
+  }
+)
diff --git a/R/model-tobit.R b/R/model-tobit.R
new file mode 100755
index 0000000..34af5fe
--- /dev/null
+++ b/R/model-tobit.R
@@ -0,0 +1,178 @@
+#' Linear Regression for a Left-Censored Dependent Variable
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'@param below (defaults to 0) The point at which the dependent variable is censored from below.
+#'  If any values in the dependent variable are observed to be less than the censoring point,
+#'  it is assumed that that particular observation is censored from below at the observed value.
+#'@param above (defaults to \code{Inf}) The point at which the dependent variable is censored from above.
+#'  If any values in the dependent variable are observed to be more than the censoring point,
+#'  it is assumed that that particular observation is censored from above at the observed value.
+#'@param robust defaults to FALSE. If TRUE, \code{zelig()} computes robust standard errors based on
+#'  sandwich estimators and the options selected in cluster.
+#'@param cluster if robust = TRUE, you may select a variable to define groups of correlated
+#'  observations. Let x3 be a variable that consists of either discrete numeric values, character
+#'  strings, or factors that define strata. Then z.out <- zelig(y ~ x1 + x2, robust = TRUE,
+#'  cluster = "x3", model = "tobit", data = mydata)means that the observations can be correlated
+#'  within the strata defined by the variable x3, and that robust standard errors should be
+#'  calculated according to those clusters. If robust = TRUE but cluster is not specified,
+#'  zelig() assumes that each observation falls into its own cluster.
+#'
+#' @details
+#' Additional parameters available to this model include:
+#' \itemize{
+#'   \item \code{weights}: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
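+#'
+#' For instance (using an illustrative number of replicates), bootstrap-based
+#' uncertainty estimates could be requested for the example below with
+#' \code{zelig(durable ~ age + quant, model = "tobit", data = tobin,
+#' bootstrap = 100)}.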
+#'
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @examples
+#' library(Zelig)
+#' data(tobin)
+#' z.out <- zelig(durable ~ age + quant, model = "tobit", data = tobin)
+#' summary(z.out)
+#'
+#' @seealso  Vignette: \url{http://docs.zeligproject.org/articles/zelig_tobit.html}
+#' @import methods
+#' @export Zelig-tobit
+#' @exportClass Zelig-tobit
+#'
+#' @include model-zelig.R
+
+ztobit <- setRefClass("Zelig-tobit",
+                      contains = "Zelig",
+                      fields = list(above = "numeric",
+                                    below = "numeric"))
+
+ztobit$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "tobit"
+    .self$authors <- "Kosuke Imai, Gary King, Olivia Lau"
+    .self$packageauthors <- "Christian Kleiber and Achim Zeileis"
+    .self$year <- 2011
+    .self$description = "Linear regression for Left-Censored Dependent Variable"
+    .self$fn <- quote(AER::tobit)
+    # JSON
+    .self$outcome <- "continous"
+    .self$wrapper <- "tobit"
+    .self$acceptweights <- TRUE
+  }
+)
+
+ztobit$methods(
+  zelig = function(formula, ..., below = 0, above = Inf,
+                   robust = FALSE, data, weights = NULL, by = NULL, bootstrap = FALSE) {
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- .self$zelig.call
+    .self$below <- below
+    .self$above <- above
+    .self$model.call$below <- NULL
+    .self$model.call$above <- NULL
+    .self$model.call$left <- below
+    .self$model.call$right <- above
+    callSuper(formula = formula, data = data, ..., weights = weights, by = by, bootstrap = bootstrap)
+
+    if(!robust){
+        fn2 <- function(fc, data) {
+            fc$data <- data
+            return(fc)
+        }
+        robust.model.call <- .self$model.call
+        robust.model.call$robust <- TRUE
+
+        robust.zelig.out <- .self$data %>%
+        group_by_(.self$by) %>%
+        do(z.out = eval(fn2(robust.model.call, quote(as.data.frame(.))))$var )
+
+        .self$test.statistics<- list(robust.se = robust.zelig.out$z.out)
+    }
+  }
+)
+
+
+ztobit$methods(
+  param = function(z.out, method="mvn") {
+    if(identical(method,"mvn")){
+      mu <- c(coef(z.out), log(z.out$scale))
+      simfull <- mvrnorm(n = .self$num, mu = mu, Sigma = vcov(z.out))
+      simparam.local <- as.matrix(simfull[, -ncol(simfull)])
+      simalpha <- exp(as.matrix(simfull[, ncol(simfull)]))
+      simparam.local <- list(simparam = simparam.local, simalpha = simalpha)
+      return(simparam.local)
+    } else if(identical(method,"point")){
+      return(list(simparam = t(as.matrix(coef(z.out))), simalpha = log(z.out$scale) ))
+    }
+  }
+)
+
+ztobit$methods(
+  qi = function(simparam, mm) {
+    Coeff <- simparam$simparam %*% t(mm)
+    SD <- simparam$simalpha
+    alpha <- simparam$simalpha
+    lambda <- dnorm(Coeff / SD) / (pnorm(Coeff / SD))
+    ev <- pnorm(Coeff / SD) * (Coeff + SD * lambda)
+    pv <- ev
+    pv <- matrix(nrow = nrow(ev), ncol = ncol(ev))
+    for (j in 1:ncol(ev)) {
+      pv[, j] <- rnorm(nrow(ev), mean = ev[, j], sd = SD)
+      pv[, j] <- pmin(pmax(pv[, j], .self$below), .self$above)
+    }
+    return(list(ev = ev, pv = pv))
+  }
+)
+
+ztobit$methods(
+  mcfun = function(x, b0=0, b1=1, alpha=1, sim=TRUE){
+    mu <- b0 + b1 * x
+    ystar <- rnorm(n=length(x), mean=mu, sd=alpha)
+    if(sim){
+        y <- (ystar>0) * ystar  # censoring from below at zero
+        return(y)
+    }else{
+        y.uncensored.hat.tobit<- mu + dnorm(mu, mean=0, sd=alpha)/pnorm(mu, mean=0, sd=alpha)
+        y.hat.tobit<- y.uncensored.hat.tobit * (1- pnorm(0, mean=mu, sd=alpha) )  # expected value of censored outcome
+        return(y.hat.tobit)
+    }
+  }
+)
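+
+# Editor's sketch of the full workflow implied by the methods above
+# (hypothetical session code; the tobin data ships with Zelig):
+#   library(Zelig)
+#   data(tobin)
+#   z.out  <- zelig(durable ~ age + quant, model = "tobit", data = tobin)
+#   x.low  <- setx(z.out, quant = quantile(tobin$quant, 0.2))
+#   x.high <- setx(z.out, quant = quantile(tobin$quant, 0.8))
+#   s.out  <- sim(z.out, x = x.low, x1 = x.high)  # draws ev, pv and first differences
+#   summary(s.out)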
diff --git a/R/model-weibull.R b/R/model-weibull.R
new file mode 100644
index 0000000..152dfb5
--- /dev/null
+++ b/R/model-weibull.R
@@ -0,0 +1,183 @@
+#' Weibull Regression for Duration Dependent Variables
+#'
+#'
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to 'TRUE' (default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' In addition to the standard inputs, zelig() takes the following
+#' additional options for weibull regression:
+#' \itemize{
+#'     \item \code{robust}: defaults to FALSE. If TRUE, zelig() computes
+#'     robust standard errors via sandwich estimators, using the options
+#'     given in \code{cluster}.
+#'     \item \code{cluster}: if \code{robust = TRUE}, you may select a variable
+#'     to define groups of correlated observations. Let x3 be a variable
+#'     that consists of either discrete numeric values, character strings,
+#'     or factors that define strata. Then
+#'              \code{z.out <- zelig(y ~ x1 + x2, robust = TRUE, cluster = "x3",
+#'                model = "weibull", data = mydata)}
+#'     means that the observations can be correlated within the strata defined
+#'     by the variable x3, and that robust standard errors should be calculated
+#'     according to those clusters. If \code{robust = TRUE} but cluster is not
+#'     specified, zelig() assumes that each observation falls into its own cluster.
+#' }
+#'
+#' Additional parameters available for this model include:
+#' \itemize{
+#'   \item weights: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item bootstrap: logical or numeric. If \code{FALSE}, bootstraps are not used to
+#'   robustly estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, it sets the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+#'
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @examples
+#' data(coalition)
+#' z.out <- zelig(Surv(duration, ciep12) ~ fract + numst2, model = "weibull", data = coalition)
+#'
+#' @seealso Vignette: \url{http://docs.zeligproject.org/articles/zelig_weibull.html}
+#' @import methods
+#' @export Zelig-weibull
+#' @exportClass Zelig-weibull
+#'
+#' @include model-zelig.R
+zweibull <- setRefClass("Zelig-weibull",
+                        contains = "Zelig",
+                        fields = list(simalpha = "list",
+                                      linkinv = "function",
+                                      lambda = "ANY"))
+
+zweibull$methods(
+  initialize = function() {
+    callSuper()
+    .self$name <- "weibull"
+    .self$authors <- "Olivia Lau, Kosuke Imai, Gary King"
+    .self$packageauthors <- "Terry M Therneau, and Thomas Lumley"
+    .self$year <- 2007
+    .self$description <- "Weibull Regression for Duration Dependent Variables"
+    .self$fn <- quote(survival::survreg)
+    .self$linkinv <- survreg.distributions[["weibull"]]$itrans
+    # JSON
+    .self$outcome <- "bounded"
+    .self$wrapper <- "weibull"
+    .self$acceptweights <- TRUE
+  }
+)
+
+zweibull$methods(
+  zelig = function(formula, ..., robust = FALSE, cluster = NULL, data, weights = NULL, by = NULL, bootstrap = FALSE) {
+
+    localFormula <- formula # avoids CRAN warning about deep assignment from formula existing separately as argument and field
+    .self$zelig.call <- match.call(expand.dots = TRUE)
+    .self$model.call <- .self$zelig.call
+    if (!(is.null(cluster) || robust))
+      stop("If cluster is specified, then `robust` must be TRUE")
+    # Add cluster term
+    if (robust || !is.null(cluster))
+      localFormula <- cluster.formula(localFormula, cluster)
+    .self$model.call$dist <- "weibull"
+    .self$model.call$model <- FALSE
+    callSuper(formula = localFormula, data = data, ..., robust = robust,
+              cluster = cluster,  weights = weights, by = by, bootstrap = bootstrap)
+
+    if(!robust){
+      fn2 <- function(fc, data) {
+        fc$data <- data
+        return(fc)
+      }
+      robust.model.call <- .self$model.call
+      robust.model.call$robust <- TRUE
+
+      robust.zelig.out <- .self$data %>%
+      group_by_(.self$by) %>%
+      do(z.out = eval(fn2(robust.model.call, quote(as.data.frame(.))))$var )
+
+      .self$test.statistics<- list(robust.se = robust.zelig.out$z.out)
+    }
+  }
+)
+
+zweibull$methods(
+  param = function(z.out, method="mvn") {
+    if(identical(method,"mvn")){
+      coeff <- coef(z.out)
+      mu <- c(coeff, log(z.out$scale) )  # JH this is the scale of the vcov used below
+      cov <- vcov(z.out)
+      simulations <- mvrnorm(.self$num, mu = mu, Sigma = cov)
+      simparam.local <- as.matrix(simulations[, 1:length(coeff)])
+      simalpha.local <- as.matrix(simulations[, (length(coeff)+1)])
+      simparam.local <- list(simparam = simparam.local, simalpha = simalpha.local)
+      return(simparam.local)
+    } else if(identical(method,"point")){
+      return(list(simparam = t(as.matrix(coef(z.out))), simalpha = log(z.out$scale)))
+    }
+  }
+)
+
+zweibull$methods(
+  qi = function(simparam, mm) {
+    eta <- simparam$simparam %*% t(mm)
+    theta <- as.matrix(apply(eta, 2, linkinv))
+    ev <- theta * gamma(1 + exp(simparam$simalpha))
+    pv <- as.matrix(rweibull(length(ev), shape = 1/exp(simparam$simalpha), scale = theta))
+    return(list(ev = ev, pv = pv))
+  }
+)
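+
+# Editor's note (illustrative, not upstream code): qi() above uses the Weibull mean
+#   E[Y] = scale * gamma(1 + 1/shape),
+# with scale = linkinv(X %*% beta) and shape = 1 / exp(simalpha). A quick check:
+#   scale <- 2; shape <- 1.5
+#   mean(rweibull(1e5, shape = shape, scale = scale))  # approximately equal to
+#   scale * gamma(1 + 1 / shape)                       # ~1.81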
+
+zweibull$methods(
+  mcfun = function(x, b0=0, b1=1, alpha=1, sim=TRUE){
+    .self$mcformula <- as.Formula("Surv(y.sim, event) ~ x.sim")
+
+    mylambda <- exp(b0 + b1 * x)
+    event <- rep(1, length(x))
+    y.sim <- rweibull(n=length(x), shape=alpha, scale=mylambda)
+    y.hat <- mylambda * gamma(1 + (1/alpha))
+
+    if(sim){
+        mydata <- data.frame(y.sim=y.sim, event=event, x.sim=x)
+        return(mydata)
+    }else{
+        mydata <- data.frame(y.hat=y.hat, event=event, x.seq=x)
+        return(mydata)
+    }
+  }
+)
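+
+# Editor's sketch of the robust / clustered option documented above
+# (hypothetical session code; the coalition data ships with Zelig):
+#   library(Zelig)
+#   data(coalition)
+#   z.out <- zelig(Surv(duration, ciep12) ~ fract + numst2,
+#                  model = "weibull", robust = TRUE, cluster = "polar",
+#                  data = coalition)
+#   summary(z.out)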
diff --git a/R/model-zelig.R b/R/model-zelig.R
new file mode 100755
index 0000000..64060b2
--- /dev/null
+++ b/R/model-zelig.R
@@ -0,0 +1,1605 @@
+#' Zelig reference class
+#'
+#' Zelig website: \url{https://zeligproject.org/}
+#'
+#' @import methods
+#' @export Zelig
+#' @exportClass Zelig
+#'
+#' @field fn R function to call to wrap
+#' @field formula Zelig formula
+#' @field weights [forthcoming]
+#' @field name name of the Zelig model
+#' @field data data frame or matrix
+#' @field by split the data by factors
+#' @field mi work with imputed dataset
+#' @field idx model index
+#' @field zelig.call Zelig function call
+#' @field model.call wrapped function call
+#' @field zelig.out estimated zelig model(s)
+#' @field setx.out set values
+#' @field setx.labels pretty-print qi
+#' @field bsetx is x set?
+#' @field bsetx1 is x1 set?
+#' @field bsetrange is range set?
+#' @field bsetrange1 is range1 set?
+#' @field range range
+#' @field range1 range1
+#' @field test.statistics list of test statistics
+#' @field sim.out simulated qi's
+#' @field simparam simulated parameters
+#' @field num  number of simulations
+#' @field authors Zelig model authors
+#' @field zeligauthors Zelig authors
+#' @field modelauthors wrapped model authors
+#' @field packageauthors wrapped package authors
+#' @field refs citation information
+#' @field year year the model was released
+#' @field description model description
+#' @field url model URL
+#' @field url.docs model documentation URL
+#' @field category model category
+#' @field vignette.url vignette URL
+#' @field json JSON export
+#' @field ljson JSON export
+#' @field outcome JSON export
+#' @field wrapper JSON export
+#' @field explanatory JSON export
+#' @field mcunit.test unit testing
+#' @field with.feedback Feedback
+#' @field robust.se return robust standard errors
+
+z <- setRefClass("Zelig", fields = list(fn = "ANY", # R function to call to wrap
+                                        formula = "ANY", # Zelig formula
+                                        weights = "ANY",
+                                        acceptweights = "logical",
+                                        name = "character", # name of the Zelig model
+                                        data = "ANY", # data frame or matrix,
+                                        originaldata = "ANY", # data frame or matrix,
+                                        originalweights = "ANY",
+                                        # ddata = "ANY",
+                                        # data.by = "ANY", # data frame or matrix
+                                        by = "ANY",
+                                        mi = "logical",
+                                        matched = "logical",
+
+                                        avg = "ANY",
+
+                                        idx = "ANY", # model index
+
+                                        zelig.call = "call", # Zelig function call
+                                        model.call = "call", # wrapped function call
+                                        zelig.out = "ANY", # estimated zelig model(s)
+                                        signif.stars = "logical",
+                                        signif.stars.default = "logical", # significance stars default
+
+                                        setx.out = "ANY", # set values
+                                        setx.labels = "list", # pretty-print qi,
+                                        bsetx = "logical",
+                                        bsetx1 = "logical",
+                                        bsetrange = "logical",
+                                        bsetrange1 = "logical",
+                                        range = "ANY",
+                                        range1 = "ANY",
+                                        setforeveryby = "logical",
+
+                                        test.statistics = "ANY",
+
+                                        sim.out = "list", # simulated qi's
+                                        simparam = "ANY", # simulated parameters
+                                        num = "numeric", # nb of simulations
+                                        bootstrap = "logical", # use bootstrap
+                                        bootstrap.num = "numeric", # number of bootstraps to use
+
+                                        authors = "character", # Zelig model description
+                                        zeligauthors = "character",
+                                        modelauthors = "character",
+                                        packageauthors = "character",
+                                        refs = "ANY", # is there a way to recognize class "bibentry"?,
+
+                                        year = "numeric",
+                                        description = "character",
+                                        url = "character",
+                                        url.docs = "character",
+                                        category = "character",
+
+                                        vignette.url = "character",
+
+                                        json = "ANY", # JSON export
+                                        ljson = "ANY",
+                                        outcome = "ANY",
+                                        wrapper = "character",
+                                        explanatory = "ANY",
+
+                                        #Unit Testing
+                                        mcunit.test = "ANY",
+                                        mcformula = "ANY",
+
+                                        # Feedback
+                                        with.feedback = "logical",
+
+                                        # Robust standard errors
+                                        robust.se = "logical"
+                                        ))
+
+z$methods(
+  initialize = function() {
+    .self$authors <- "Kosuke Imai, Gary King, and Olivia Lau"
+    .self$zeligauthors <- "Christine Choirat, Christopher Gandrud, James Honaker, Kosuke Imai, Gary King, and Olivia Lau"
+    .self$refs <- bibentry()
+    .self$year <- as.numeric(format(Sys.Date(), "%Y"))
+    .self$url <- "https://zeligproject.org/"
+    .self$url.docs <- "http://docs.zeligproject.org/articles/"
+    .self$setx.out <- list()
+    .self$setx.labels <- list(ev  = "Expected Values: E(Y|X)",
+                              ev1 = "Expected Values: E(Y|X1)",
+                              pv  = "Predicted Values: Y|X",
+                              pv1 = "Predicted Values: Y|X1",
+                              fd  = "First Differences: E(Y|X1) - E(Y|X)")
+    .self$bsetx <- FALSE
+    .self$bsetx1 <- FALSE
+    .self$bsetrange <- FALSE
+    .self$bsetrange1 <- FALSE
+    .self$acceptweights <- FALSE
+
+    .self$bootstrap <- FALSE
+    .self$bootstrap.num <- 100
+    # JSON
+    .self$vignette.url <- paste(.self$url.docs, tolower(class(.self)[1]), ".html", sep = "")
+    .self$vignette.url <- sub("-gee", "gee", .self$vignette.url)
+    .self$vignette.url <- sub("-bayes", "bayes", .self$vignette.url)
+    # .self$vignette.url <- paste(.self$url.docs, "zelig-", sub("-", "", .self$name), ".html", sep = "")
+    .self$category <- "undefined"
+    .self$explanatory <- c("continuous",
+                           "discrete",
+                           "nominal",
+                           "ordinal",
+                           "binary")
+    .self$outcome <- ""
+    .self$wrapper <- "wrapper"
+    # Is 'ZeligFeedback' package installed?
+    .self$with.feedback <- "ZeligFeedback" %in% installed.packages()
+    .self$setforeveryby <- TRUE
+
+    .self$avg <- function(val) {
+      if (is.numeric(val))
+        mean(val)
+      else if (is.ordered(val))
+        Median(val)
+      else
+        Mode(val)
+    }
+  }
+)
+
+z$methods(
+    packagename = function() {
+        "Automatically retrieve wrapped package name"
+        # If this becomes "quote(mypackage::myfunction) then
+        # regmatches(.self$fn,regexpr("(?<=\\()(.*?)(?=\\::)",.self$fn, perl=TRUE))
+        # would extract "mypackage"
+        return(as.character(.self$fn)[2])
+    }
+)
+
+z$methods(
+    cite = function() {
+        "Provide citation information about Zelig and Zelig model, and about wrapped package and wrapped model"
+        title <- paste(.self$name, ": ", .self$description, sep="")
+        localauthors <- ""
+        if (length(.self$modelauthors) & (!identical(.self$modelauthors,""))){
+            # covers both empty styles: character(0) and "" --the latter being length 1.
+            localauthors<-.self$modelauthors
+        } else if (length(.self$packageauthors) & (!identical(.self$packageauthors,""))){
+            localauthors<-.self$packageauthors
+        } else {
+            localauthors<-.self$zeligauthors
+        }
+        cat("How to cite this model in Zelig:\n  ",
+            localauthors, ". ", .self$year, ".\n  ", title,
+            "\n  in ", .self$zeligauthors,
+            ",\n  \"Zelig: Everyone's Statistical Software,\" ",
+            .self$url, "\n", sep = "")
+    }
+)
+
+# Construct a reference list specific to a Zelig model
+# Styles available from the bibentry print method: "text", "Bibtex", "citation", "html", "latex", "R", "textVersion"
+# The "sphinx" style reformats "text" style with some markdown substitutions
+
+z$methods(
+    references = function(style="sphinx") {
+        "Construct a reference list specific to a Zelig model."
+        mystyle <- style
+        if (mystyle=="sphinx"){
+            mystyle <- "text"
+        }
+        mycites<-.self$refs
+        if(!is.na(.self$packagename() )) {
+            mycites <- c(mycites, citation(.self$packagename()))
+            # Concatenate model-specific Zelig references with package references
+        }
+        mycites<-mycites[!duplicated(mycites)]
+        # Remove duplicates (many packages have duplicate references in their lists)
+        s <- capture.output(print(mycites, style = mystyle))
+        if(style == "sphinx"){
+            # format the "text" style conventions for sphinx markdown for
+            # building docs for zeligproject.org
+            s<-gsub("\\*","\\*\\*",s, perl=TRUE)
+            s<-gsub("_","\\*",s, perl=TRUE)
+            s<-gsub("\\*\\(","\\* \\(",s, perl=TRUE)
+        }
+        cat(s, sep="\n")
+    }
+)
+
+#' Zelig method
+#' @param formula TEST
+
+z$methods(
+  zelig = function(formula, data, model = NULL, ...,
+                   weights = NULL, by, bootstrap = FALSE) {
+    "The zelig function estimates a variety of statistical models"
+
+    fn2 <- function(fc, data) {
+      fc$data <- data
+      return(fc)
+    }
+
+    # Prepare data for possible transformations
+    if ("amelia" %in% class(data)) {
+        localdata <- data$imputations
+        is_matched <- FALSE
+    }
+    else if ("matchit" %in% class(data)) {
+        is_matched <- TRUE
+        localdata <- MatchIt::match.data(data)
+        iweights <- localdata$weights
+    }
+    else {
+        localdata <- data
+        is_matched <- FALSE
+    }
+
+    # Without dots for single and multiple equations
+    temp_formula <- as.Formula(formula)
+    if (sum(length(temp_formula)) <= 2)
+        .self$formula <- as.Formula(terms(temp_formula,
+                                    data = localdata))
+    else if (sum(length(temp_formula)) > 2) {
+        f_dots <- attr(terms(temp_formula, data = localdata), "Formula_without_dot")
+        if (!is.null(f_dots))
+           # .self$formula <- as.Formula(f_dots)
+           stop('formula expansion not currently supported for formulas with multiple equations.\nPlease directly specify the variables in the formula call.',
+                call. = FALSE)
+        else
+            .self$formula <- as.Formula(formula)
+    }
+
+    # Convert factors and logs converted internally to the zelig call
+    form_factors <- transformer(.self$formula, FUN = 'factor', check = TRUE)
+    form_logs <- transformer(.self$formula, FUN = 'log', check = TRUE)
+    if (any(c(form_factors, form_logs))) {
+        if (form_factors) {
+            localformula <- transformer(formula, data = localdata,
+                                        FUN = 'factor', f_out = TRUE)
+            localdata <- transformer(formula, data = localdata,
+                                     FUN = 'factor', d_out = TRUE)
+            .self$formula <- localformula
+            .self$data <- localdata
+        }
+        if (form_logs) {
+            if (.self$name == 'ivreg')
+                stop('logging values in the zelig call is not currently supported for ivreg models.',
+                     call. = FALSE)
+            localformula <- transformer(formula, data = localdata,
+                                        FUN = 'log', f_out = TRUE)
+            localdata <- transformer(formula, data = localdata,
+                                     FUN = 'log', d_out = TRUE)
+            .self$formula <- localformula
+            .self$data <- localdata
+        }
+    }
+
+    if (!("relogit" %in% .self$wrapper))
+        .self$model.call$formula <- match.call(zelig, .self$formula)
+    else if ("relogit" %in% .self$wrapper) {
+        .self$modcall_formula_transformer()
+    }
+
+    # Overwrite formula with mc unit test formula into correct environment, if it exists
+    # Requires fixing R scoping issue
+    if("formula" %in% class(.self$mcformula)){
+        .self$formula <- as.Formula( deparse(.self$mcformula),
+                                    env = environment(.self$formula) )
+        .self$model.call$formula <- as.Formula( deparse(.self$mcformula),
+                                               env = globalenv() )
+    } else if(is.character(.self$mcformula)) {
+        .self$formula <- as.Formula( .self$mcformula,
+                                    env = environment(.self$formula) )
+        .self$model.call$formula <- as.Formula( .self$mcformula,
+                                                env = globalenv() )
+    }
+    if(!is.null(model)){
+        cat("Argument model is only valid for the Zelig wrapper, but not the Zelig method, and will be ignored.\n")
+        flag <- !(names(.self$model.call) == "model")
+        .self$model.call <- .self$model.call[flag]
+        flag <- !(names(.self$zelig.call) == "model")
+        .self$zelig.call <- .self$zelig.call[flag]
+    }
+
+    .self$by <- by
+    .self$originaldata <- localdata
+    .self$originalweights <- weights
+    datareformed <- FALSE
+
+    if(is.numeric(bootstrap)){
+        .self$bootstrap <- TRUE
+        .self$bootstrap.num <- bootstrap
+    } else if(is.logical(bootstrap)){
+        .self$bootstrap <- bootstrap
+    }
+    # Remove bootstrap argument from model call
+    .self$model.call$bootstrap <- NULL
+    # Check if bootstrap possible by checking whether param method has method argument available
+    if(.self$bootstrap){
+        if(!("method" %in% names(formals(.self$param)))){
+            stop("The bootstrap does not appear to be implemented for this Zelig model. Check that the param() method allows point predictions.")
+        }
+        .self$setforeveryby <- FALSE  # compute covariates in set() at the dataset-level
+    }
+
+
+    # Matched datasets from MatchIt
+    if (is_matched){
+        .self$matched <- TRUE
+        .self$data <- localdata
+        datareformed <- TRUE
+
+        # Check if noninteger valued weights exist and are incompatible with zelig model
+        validweights <- TRUE
+        if(!.self$acceptweights){           # This is a convoluted way to do this, but avoids the costly "any()" calculation if not necessary
+            if(any(iweights != ceiling(iweights))){  # any(y != ceiling(y)) tests slightly faster than all(y == ceiling(y))
+                validweights <- FALSE
+            }
+        }
+        if(!validweights){   # equivalent to if((!acceptweights) & any(iweights != ceiling(iweights))), but avoids the long any() for big datasets
+            cat("The weights created by matching for this dataset have noninteger values,\n",
+                "however, the statistical model you have chosen is only compatible with integer weights.\n",
+                "Either change the matching method (such as to `optimal' matching with a 1:1 ratio)\n",
+                "or change the statistical model in Zelig.\n",
+                "We will round matching weights up to integers to proceed.\n\n")
+            .self$weights <- ceiling(iweights)
+        } else {
+            .self$weights <- iweights
+        }
+
+        # Set references appropriate to matching methods used
+        .self$refs <- c(.self$refs, citation("MatchIt"))
+        if(m.out$call$method=="cem" & ("cem" %in% installed.packages()))
+            .self$refs <- c(.self$refs, citation("cem"))
+            #if(m.out$call$method=="exact") .self$refs <- c(.self$refs, citation(""))
+        if((m.out$call$method=="full") & ("optmatch" %in% installed.packages()))
+            .self$refs <- c(.self$refs, citation("optmatch"))
+        if(m.out$call$method=="genetic" & ("Matching" %in% installed.packages()))
+            .self$refs <- c(.self$refs, citation("Matching"))
+        #if(m.out$call$method=="nearest") .self$refs <- c(.self$refs, citation(""))
+        if(m.out$call$method=="optimal" & ("optmatch" %in% installed.packages()))
+            .self$refs <- c(.self$refs, citation("optmatch"))
+            #if(m.out$call$method=="subclass") .self$refs <- c(.self$refs, citation(""))
+    } else {
+        .self$matched  <- FALSE
+    }
+
+    # Multiply Imputed datasets from Amelia or mi utility
+    # Notice imputed objects ignore weights currently,
+    # which is reasonable as the Amelia package ignores weights
+    if (("amelia" %in% class(localdata)) | ("mi" %in% class(localdata))) {
+        idata <- localdata
+        .self$data <- bind_rows(lapply(seq(length(idata)),
+                                    function(imputationNumber)
+                                        cbind(imputationNumber,
+                                            idata[[imputationNumber]])))
+        if (!is.null(weights))
+            stop('weights are currently not available with imputed data.',
+                    call. = FALSE)
+        .self$weights <- NULL  # This should be considered or addressed
+        datareformed <- TRUE
+        .self$by <- c("imputationNumber", by)
+        .self$mi <- TRUE
+        .self$setforeveryby <- FALSE  # compute covariates in set() on the entire stacked dataset
+        .self$refs <- c(.self$refs, citation("Amelia"))
+
+        if (.self$fn == "geepack::geeglm" & is.character(.self$model.call$id)) {
+            .self$model.call$id <- subset(.self$data,
+                              imputationNumber == 1)[, .self$model.call$id]
+        }
+
+    } else {
+        .self$mi <- FALSE
+    }
+
+    if (!datareformed){
+        .self$data <- localdata
+        # If none of the above package integrations have already reformed the
+        # data from another object, use the supplied data
+
+        # Run some checking on weights argument, and see if is valid string or vector
+        if(!is.null(weights)){
+            if(is.character(weights)){
+                if(weights %in% names(.self$data)){
+                    .self$weights <- .self$data[[weights]]  # This is a way to convert data.frame portion to type numeric (as data.frames are lists)
+                    } else {
+                        warning("Variable name given for weights not found in dataset, so will be ignored.\n\n",
+                            call. = FALSE)
+                        .self$weights <- NULL  # No valid weights
+                        .self$model.call$weights <- NULL
+                    }
+            }
+            else if(is.vector(weights)){
+                if (length(weights) == nrow(.self$data) & is.vector(weights)){
+                    localWeights <- weights
+                    # avoids CRAN warning about deep assignment from weights existing separately as argument and field
+                    if(min(localWeights) < 0) {
+                        localWeights[localWeights < 0] <- 0
+                        warning("Negative valued weights were supplied and will be replaced with zeros.",
+                            call. = FALSE)
+                    }
+                .self$weights <- localWeights # Weights
+                } else {
+                    warning("Length of vector given for weights is not equal to number of observations in dataset, and will be ignored.\n\n",
+                        call. = FALSE)
+                    .self$weights <- NULL # No valid weights
+                    .self$model.call$weights <- NULL
+                }
+            } else {
+                warning("Supplied weights argument is not a vector or a variable name in the dataset, and will be ignored.\n\n",
+                    call. = FALSE)
+                .self$weights <- NULL # No valid weights
+                .self$model.call$weights <- NULL
+            }
+        } else {
+            .self$weights <- NULL  # No weights set, so weights are NULL
+            .self$model.call$weights <- NULL
+        }
+    }
+
+    # If the Zelig model does not accept weights, but weights are provided, we rebuild the data
+    #   by bootstrapping using the weights as probabilities
+    #   or by duplicating rows proportional to the ceiling of their weight
+    # Otherwise we pass the weights to the model call
+    if(!is.null(.self$weights)){
+        if ((!.self$acceptweights)){
+            .self$buildDataByWeights2()
+            # Could use alternative method $buildDataByWeights() for duplication
+            # approach.  Maybe set as argument?
+            .self$model.call$weights <- NULL
+        } else {
+            .self$model.call$weights <- .self$weights
+            # NEED TO CHECK THIS IS THE NAME FOR ALL MODELS, or add more generic
+            # field containing the name for the weights argument
+        }
+    }
+
+    if (.self$bootstrap){
+        .self$buildDataByBootstrap()
+    }
+
+    .self$model.call[[1]] <- .self$fn
+    .self$model.call$by <- NULL
+    if (is.null(.self$by)) {
+        .self$data <- cbind(1, .self$data)
+        names(.self$data)[1] <- "by"
+        .self$by <- "by"
+    }
+
+    #cat("zelig.call:\n")
+    #print(.self$zelig.call)
+    #cat("model.call:\n")
+    #print(.self$model.call)
+    .self$data <- tbl_df(.self$data)
+    #.self$zelig.out <- eval(fn2(.self$model.call, data = data)) # shortened test version that bypasses "by"
+    .self$zelig.out <- .self$data %>%
+        group_by_(.self$by) %>%
+        do(z.out = eval(fn2(.self$model.call,
+            quote(as.data.frame(.)))))
+    }
+)
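+
+# Editor's sketch of the split-apply pattern used above to build zelig.out
+# (standalone illustration with dplyr and lm, not part of the package):
+#   library(dplyr)
+#   fits <- mtcars %>%
+#     group_by_("cyl") %>%                               # one group per level of the "by" variable
+#     do(z.out = lm(mpg ~ wt, data = as.data.frame(.)))  # one fitted model per group
+#   lapply(fits$z.out, coef)                             # coefficients for each subset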
+
+z$methods(
+  set = function(..., fn = list(numeric = mean, ordered = Median)) {
+    "Setting Explanatory Variable Values"
+    is_uninitializedField(.self$zelig.out)
+    is_zeligei(.self)
+
+    # Find variable transformations in formula call
+#    coef_names <- names(rm_intercept(unlist(.self$get_coef())))
+
+    .self$avg <- function(val) {
+        if (is.numeric(val))
+            ifelse(is.null(fn$numeric), mean(val), fn$numeric(val))
+        else if (is.ordered(val))
+            ifelse(is.null(fn$ordered), Median(val), fn$ordered(val))
+        else
+            Mode(val)
+    }
+    s <- list(...)
+
+    # This eliminates warning messages when factor rhs passed to lm() model in reduce() utility function
+    if(.self$category == "multinomial"){  # Perhaps find more robust way to test if dep.var. is factor
+      f2 <- update(.self$formula, as.numeric(.) ~ .)
+    } else {
+      f2 <- .self$formula
+    }
+
+    f <- update(.self$formula, 1 ~ .)
+    # update <- na.omit(.self$data) %>% # remove missing values
+
+    # compute on each slice of the dataset defined by "by"
+    if(.self$setforeveryby){
+      update <- .self$data %>%
+        group_by_(.self$by) %>%
+        do(mm = model.matrix(f, reduce(dataset = "MEANINGLESS ARGUMENT", s,
+                                       formula = f2,
+                                       data = ., avg = .self$avg))) # fix in last argument from data=.self$data to data=.  (JH)
+
+      # compute over the entire dataset  - currently used for mi and bootstrap.  Should be opened up to user.
+    } else {
+      if(.self$bootstrap){
+        flag <- .self$data$bootstrapIndex == (.self$bootstrap.num + 1) # These are the original observations
+        tempdata <- .self$data[flag,]
+      } else {
+        tempdata <- .self$data # presently this is for mi.  And this is then the entire stacked dataset.
+      }
+
+      allreduce <- reduce(dataset = "MEANINGLESS ARGUMENT", s,
+                          formula = f2,
+                          data = tempdata,
+                          avg = .self$avg)
+      allmm <- model.matrix(f, allreduce)
+      update <- .self$data %>%
+        group_by_(.self$by) %>%
+        do(mm = allmm)
+    }
+    return(update)
+  }
+)
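+
+# Editor's sketch of what set() returns for each "by" slice (illustrative only):
+# a one-row model matrix holding the chosen / averaged covariate values.
+#   d   <- data.frame(x1 = rnorm(50), x2 = factor(sample(c("a", "b"), 50, replace = TRUE)))
+#   row <- data.frame(x1 = mean(d$x1), x2 = factor("a", levels = levels(d$x2)))
+#   model.matrix(~ x1 + x2, row)   # the 1 x p matrix later passed to qi(simparam, mm)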
+
+z$methods(
+  setx = function(..., fn = list(numeric = mean, ordered = Median,
+                                 other = Mode)) {
+    is_uninitializedField(.self$zelig.out)
+    is_zeligei(.self)
+
+    .self$bsetx <- TRUE
+    .self$setx.out$x  <- .self$set(..., fn = fn)
+  }
+)
+
+z$methods(
+  setx1 = function(..., fn = list(numeric = mean, ordered = Median,
+                                  other = Mode)) {
+    .self$bsetx1 <- TRUE
+    .self$setx.out$x1 <- .self$set(...)
+  }
+)
+
+z$methods(
+  setrange = function(..., fn = list(numeric = mean, ordered = Median,
+                                     other = Mode)) {
+    is_uninitializedField(.self$zelig.out)
+
+    .self$bsetrange <- TRUE
+    rng <- list()
+    s <- list(...)
+    m <- expand_grid_setrange(s)
+    .self$range <- m
+    .self$setx.out$range <- list()
+    for (i in 1:nrow(m)) {
+      l <- as.list(as.list(m[i, ]))
+      names(l) <- names(m)
+      .self$setx.out$range[[i]] <- .self$set(l)
+    }
+  }
+)
+
+z$methods(
+  setrange1 = function(..., fn = list(numeric = mean, ordered = Median,
+                                      other = Mode)) {
+    .self$bsetrange1 <- TRUE
+    rng <- list()
+    s <- list(...)
+    m <- expand_grid_setrange(s)
+    .self$range1 <- m
+    .self$setx.out$range1 <- list()
+    for (i in 1:nrow(m)) {
+      l <- as.list(as.list(m[i, ]))
+      names(l) <- names(m)
+      .self$setx.out$range1[[i]] <- .self$set(l)
+    }
+  }
+)
+
+z$methods(
+  param = function(z.out, method = "mvn") {
+    if(identical(method,"mvn")){
+      return(mvrnorm(.self$num, coef(z.out), vcov(z.out)))
+    } else if(identical(method,"point")){
+      return(t(as.matrix(coef(z.out))))
+    } else {
+      stop("param called with method argument of undefined type.")
+    }
+  }
+)
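+
+# Editor's sketch of the default parametric simulation above (assumes only MASS):
+#   fit  <- lm(dist ~ speed, data = cars)
+#   sims <- MASS::mvrnorm(1000, mu = coef(fit), Sigma = vcov(fit))
+#   apply(sims, 2, sd)   # roughly matches the estimated standard errors of fit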
+
+z$methods(
+  sim = function(num = NULL) {
+    "Generic Method for Computing and Organizing Simulated Quantities of Interest"
+    is_zelig(.self)
+    is_uninitializedField(.self$zelig.out)
+    is_zeligei(.self)
+
+    ## If num is defined by user, it overrides the value stored in the .self$num field.
+    ## If num is not defined by user, but is also not yet defined in .self$num, then it defaults to 1000.
+
+    localNum <- num # avoids CRAN warning about deep assignment from num existing separately as argument and field
+    if (length(.self$num) == 0){
+      if(is.null(localNum)){
+        localNum <- 1000
+      }
+    }
+    if(!is.null(localNum)){
+      .self$num <- localNum
+    }
+
+    # This was the previous version, which assumed sim() was called only once, or was the only method to access/write the .self$num field:
+    #if (length(.self$num) == 0)
+    #  .self$num <- num
+
+    # Divide simulations among imputed datasets
+    if(.self$mi){
+      am.m <- length(.self$get_coef())
+      .self$num <- ceiling(.self$num/am.m)
+    }
+    # If bootstrapped, use distribution of estimated parameters,
+    #  otherwise use $param() method for parametric bootstrap.
+    if (.self$bootstrap & ! .self$mi){
+      .self$num <- 1
+      .self$simparam <- .self$zelig.out %>%
+        do(simparam = .self$param(.$z.out, method = "point"))
+    } else {
+      .self$simparam <- .self$zelig.out %>%
+        do(simparam = .self$param(.$z.out))
+    }
+
+    if (.self$bsetx)
+      .self$simx()
+    if (.self$bsetx1)
+      .self$simx1()
+    if (.self$bsetrange)
+      .self$simrange()
+    if (.self$bsetrange1)
+      .self$simrange1()
+
+    #if (is.null(.self$sim.out$x) & is.null(.self$sim.out$range))
+    if (!isTRUE(is_sims_present(.self$sim.out, fail = FALSE)))
+      warning('No simulations drawn, likely due to insufficient inputs.',
+              call. = FALSE)
+  }
+)
+
+z$methods(
+  simx = function() {
+    d <- zelig_mutate(.self$zelig.out, simparam = .self$simparam$simparam)
+    d <- zelig_mutate(d, mm = .self$setx.out$x$mm)
+    .self$sim.out$x <-  d %>%
+      do(qi = .self$qi(.$simparam, .$mm)) %>%
+      do(ev = .$qi$ev, pv = .$qi$pv)
+  }
+)
+
+z$methods(
+  simx1 = function() {
+    d <- zelig_mutate(.self$zelig.out, simparam = .self$simparam$simparam)
+    d <- zelig_mutate(d, mm = .self$setx.out$x1$mm)
+    .self$sim.out$x1 <-  d %>%
+      do(qi = .self$qi(.$simparam, .$mm)) %>%
+      do(ev = .$qi$ev, pv = .$qi$pv)
+    d <- zelig_mutate(.self$sim.out$x1, ev0 = .self$sim.out$x$ev)
+    d <- d %>%
+      do(fd = .$ev - .$ev0)
+    .self$sim.out$x1 <- zelig_mutate(.self$sim.out$x1, fd = d$fd) #JH
+  }
+)
+
+z$methods(
+  simrange = function() {
+    .self$sim.out$range <- list()
+    for (i in 1:nrow(.self$range)) {
+      d <- zelig_mutate(.self$zelig.out, simparam = .self$simparam$simparam)
+      d <- zelig_mutate(d, mm = .self$setx.out$range[[i]]$mm)
+      .self$sim.out$range[[i]] <-  d %>%
+        do(qi = .self$qi(.$simparam, .$mm)) %>%
+        do(ev = .$qi$ev, pv = .$qi$pv)
+    }
+  }
+)
+
+z$methods(
+  simrange1 = function() {
+    .self$sim.out$range1 <- list()
+    for (i in 1:nrow(.self$range1)) {
+      d <- zelig_mutate(.self$zelig.out, simparam = .self$simparam$simparam)
+      d <- zelig_mutate(d, mm = .self$setx.out$range1[[i]]$mm)
+      .self$sim.out$range1[[i]] <-  d %>%
+        do(qi = .self$qi(.$simparam, .$mm)) %>%
+        do(ev = .$qi$ev, pv = .$qi$pv)
+    }
+  }
+)
+
+z$methods(
+  ATT = function(treatment, treated = 1, quietly = TRUE, num = NULL) {
+    "Generic Method for Computing Simulated (Sample) Average Treatment Effects on the Treated"
+
+    ## Checks on user provided arguments
+    if(!is.character(treatment)){
+      stop("Argument treatment should be the name of the treatment variable in the dataset.")
+    }
+    if(!(treatment %in% names(.self$data))){
+      stop(cat("Specified treatment variable", treatment, "is not in the dataset."))
+    }
+    # Check treatment variable included in model.
+    # Check treatment variable is 0 or 1 (or generalize to dichotomous).
+    # Check argument "treated" is 0 or 1 (or generalize to values of "treatment").
+    # Check "ev" is available QI.
+    # Check if multiple equation model (which will need method overwrite).
+
+
+    ## If num is defined by user, it overrides the value stored in the .self$num field.
+    ## If num is not defined by user, but is also not yet defined in .self$num, then it defaults to 1000.
+    localNum <- num
+    if (length(.self$num) == 0){
+      if(is.null(localNum)){
+        localNum <- 1000
+      }
+    }
+    if(!is.null(localNum)){
+      if(!identical(localNum,.self$num)){   # .self$num changed, so regenerate simparam
+        .self$num <- localNum
+        .self$simparam <- .self$zelig.out %>%
+          do(simparam = .self$param(.$z.out))
+      }
+    }
+
+    ## Extract name of dependent variable, treated units
+    depvar <- as.character(.self$zelig.call[[2]][2])
+
+    ## Use dplyr to cycle over all splits of dataset
+    ## NOTE: THIS IS GOING TO USE THE SAME simparam SET FOR EVERY SPLIT
+    .self$sim.out$TE <- .self$data %>%
+      group_by_(.self$by) %>%
+      do(ATT = .self$simATT(simparam = .self$simparam$simparam[[1]], data = . ,
+                            depvar = depvar, treatment = treatment,
+                            treated = treated) )   # z.out = eval(fn2(.self$model.call, quote(as.data.frame(.)))))
+
+    if(!quietly){
+      return(.self$sim.out$TE)  # The $get_qi() method may generalize, otherwise, write a $getter.
+    }
+  }
+)
+
+# Has calls to .self, so constructed as method rather than function internal to $ATT()
+# Function to simulate ATT
+
+z$methods(
+  simATT = function(simparam, data, depvar, treatment, treated) {
+    "Simulate an Average Treatment on the Treated"
+
+    localData <- data # avoids CRAN warning about deep assignment from data existing separately as argument and field
+    flag <- localData[[treatment]]==treated
+    localData[[treatment]] <- 1-treated
+
+    cf.mm <- model.matrix(.self$formula, localData) # Counterfactual model matrix
+    cf.mm <- cf.mm[flag,]
+
+    y1 <- localData[flag, depvar]
+    y1.n <- sum(flag)
+
+    ATT <- matrix(NA, nrow=y1.n, ncol= .self$num)
+    for(i in 1:y1.n){                   # Maybe $qi() generally works for all mm? Of all dimensions? If so, loop not needed.
+      ATT[i,] <- as.numeric(y1[i,1]) - .self$qi(simparam=simparam, mm=cf.mm[i, , drop=FALSE])$ev
+    }
+    ATT <- apply(ATT, 2, mean)
+    return(ATT)
+  }
+)
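+
+# Editor's note (illustrative): simATT() flips the treatment indicator for the
+# treated units, rebuilds their model matrix, and contrasts each observed outcome
+# with simulated expected values under that counterfactual; conceptually,
+#   ATT_s = mean over treated i of ( y_i - E[Y | X_i, treatment flipped] )
+# for each simulation s, giving .self$num draws of the sample ATT.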
+
+z$methods(
+  get_names = function() {
+    "Return Zelig object field names"
+    z_names <- names(as.list(.self))
+    return(z_names)
+  }
+)
+
+
+z$methods(
+  show = function(signif.stars = FALSE, subset = NULL, bagging = FALSE) {
+    "Display a Zelig object"
+
+    is_uninitializedField(.self$zelig.out)
+    .self$signif.stars <- signif.stars
+    .self$signif.stars.default <- getOption("show.signif.stars")
+    options(show.signif.stars = .self$signif.stars)
+    if ("uninitializedField" %in% class(.self$zelig.out))
+      cat("Next step: Use 'zelig' method")
+    else if (length(.self$setx.out) == 0) {
+
+      #############################################################################
+      # Current workaround to display call as $zelig.call rather than $model.call
+      # This is becoming a more complex workaround than revising the summary method;
+      # should improve this approach in the future:
+      for(jj in 1:length(.self$zelig.out$z.out)){
+        if("S4" %in% typeof(.self$zelig.out$z.out[[jj]]) ){
+          slot(.self$zelig.out$z.out[[jj]],"call") <- .self$zelig.call
+        } else {
+          if("call" %in% names(.self$zelig.out$z.out[[jj]])){
+            .self$zelig.out$z.out[[jj]]$call <- .self$zelig.call
+          } else if ("call" %in% names(attributes(.self$zelig.out$z.out[[1]])) ){
+            attr(.self$zelig.out$z.out[[1]],"call")<- .self$zelig.call
+          }
+        }
+      }
+      ##########################################################################
+
+    if((.self$mi || .self$bootstrap) & is.null(subset)){
+        if (.self$mi)
+            cat("Model: Combined Imputations \n\n")
+        else
+            cat("Model: Combined Bootstraps \n\n")
+
+        mi_combined <- combine_coef_se(.self, messages = FALSE)
+        printCoefmat(mi_combined, P.values = TRUE, has.Pvalue = TRUE,
+                     digits = max(2, getOption("digits") - 4))
+        cat("\n")
+
+        if (.self$mi)
+            cat("For results from individual imputed datasets, use summary(x, subset = i:j)\n")
+        else
+            cat("For results from individual bootstrapped datasets, use summary(x, subset = i:j)\n")
+    } else if ((.self$mi) & !is.null(subset)) {
+            for(i in subset){
+                cat("Imputed Dataset ", i, sep = "")
+                print(base::summary(.self$zelig.out$z.out[[i]]))
+            }
+    } else if ((.self$bootstrap) & !is.null(subset)) {
+        for(i in subset){
+            cat("Bootstrapped Dataset ", i, sep = "")
+            print(base::summary(.self$zelig.out$z.out[[i]]))
+        }
+    } else {
+        summ <- .self$zelig.out %>%
+            do(summ = {cat("Model: \n")
+                if (length(.self$by) == 1) {
+                    if (.self$by == "by") {
+                    cat()
+                    }
+                    else {
+                        print(.[.self$by])
+                    }
+                } else {
+                    print(.[.self$by])
+                }
+                if("S4" %in% typeof(.$z.out)){  # Need to change summary method here for some classes
+                    print(summary(.$z.out))
+                } else {
+                    print(base::summary(.$z.out))
+                }
+            })
+    }
+
+
+      if("gim.criteria" %in% names(.self$test.statistics)){
+        if(.self$test.statistics$gim.criteria){
+          #               cat("According to the GIM-rule-of-thumb, your model probably has some type of specification error.\n",
+          #               "We suggest you run model diagnostics and seek to fix the problem.\n",
+          #               "You may also wish to run the full GIM test (which takes more time) to be sure.\n",
+          #               "See http://.... for more information.\n \n")
+          cat("Statistical Warning: The GIM test suggests this model is misspecified\n",
+              "(based on comparisons between classical and robust SE's; see http://j.mp/GIMtest).\n",
+              "We suggest you run diagnostics to ascertain the cause, respecify the model\n",
+              "and run it again.\n\n")
+        }
+      }
+
+    if (!is_zeligei(.self, fail = FALSE)) cat("Next step: Use 'setx' method\n")
+    } else if (length(.self$setx.out) != 0 & length(.self$sim.out) == 0) {
+      niceprint <- function(obj, name){
+        if(!is.null(obj[[1]])){
+          cat(name, ":\n", sep = "")
+          if (is.data.frame(obj))
+              screenoutput <- obj
+          else
+              screenoutput <- obj[[1]]
+          attr(screenoutput,"assign") <- NULL
+          print(screenoutput, digits = max(2, getOption("digits") - 4))
+        }
+      }
+      range_out <- function(x, which_range = 'range') {
+        if (!is.null(x$setx.out[[which_range]])) {
+            xvarnames <- names(as.data.frame(x$setx.out[[which_range]][[1]]$mm[[1]]))
+            d <- length(x$setx.out[[which_range]])
+            num_cols <- length(x$setx.out[[which_range]][[1]]$mm[[1]] )
+            xmatrix <- matrix(NA, nrow = d, ncol = num_cols)
+            for (i in 1:d){
+                xmatrix[i,] <- matrix(x$setx.out[[which_range]][[i]]$mm[[1]],
+                                      ncol = num_cols)
+            }
+            xdf <- data.frame(xmatrix)
+            names(xdf) <- xvarnames
+            return(xdf)
+          }
+      }
+
+      niceprint(obj=.self$setx.out$x$mm, name="setx")
+      niceprint(obj=.self$setx.out$x1$mm, name="setx1")
+      niceprint(obj = range_out(.self), name = "range")
+      niceprint(obj = range_out(.self, 'range1'), name = "range1")
+     # niceprint(obj=.self$setx.out$range[[1]]$mm, name="range")
+     #  niceprint(obj=.self$setx.out$range1[[1]]$mm, name="range1")
+      cat("\nNext step: Use 'sim' method\n")
+    } else { # sim.out
+      pstat <- function(s.out, what = "sim x") {
+        simu <- s.out %>%
+          do(simu = {cat("\n", what, ":\n")
+            cat(" -----\n")
+            cat("ev\n")
+            print(stat(.$ev, .self$num))
+            cat("pv\n")
+            print(stat(.$pv, .self$num))
+            if (!is.null(.$fd)) {
+              cat("fd\n")
+              print(stat(.$fd, .self$num))}
+          }
+          )
+      }
+      pstat(.self$sim.out$x)
+      pstat(.self$sim.out$x1, "sim x1")
+      if (!is.null(.self$setx.out$range)) {
+        for (i in seq(.self$sim.out$range)) {
+          cat("\n")
+          print(.self$range[i, ])
+          cat("\n")
+          pstat(.self$sim.out$range[[i]], "sim range")
+          cat("\n")
+        }
+      }
+      if (!is.null(.self$setx.out$range1)) {
+        for (i in seq(.self$sim.out$range1)) {
+          cat("\n")
+          print(.self$range1[i, ])
+          cat("\n")
+          pstat(.self$sim.out$range1[[i]], "sim range1")
+          cat("\n")
+        }
+      }
+    }
+    options(show.signif.stars = .self$signif.stars.default)
+  }
+)
+
+z$methods(
+  graph = function(...) {
+    "Plot the quantities of interest"
+
+    is_uninitializedField(.self$zelig.out)
+    is_sims_present(.self$sim.out)
+
+    if (is_simsx(.self$sim.out, fail = FALSE)) qi.plot(.self, ...)
+    if (is_simsrange(.self$sim.out, fail = FALSE)) ci.plot(.self, ...)
+  }
+)
+
+z$methods(
+  summarize = function(...) {
+    "Display a Zelig object"
+    show(...)
+  }
+)
+
+z$methods(
+  summarise = function(...) {
+    "Display a Zelig object"
+    show(...)
+  }
+)
+
+z$methods(
+  help = function() {
+    "Open the model vignette from https://zeligproject.org/"
+    #     vignette(class(.self)[1])
+    browseURL(.self$vignette.url)
+  }
+)
+
+z$methods(
+  from_zelig_model = function() {
+    "Extract the original fitted model object from a zelig call. Note only works for models using directly wrapped functions."
+
+    is_uninitializedField(.self$zelig.out)
+    result <- try(.self$zelig.out$z.out, silent = TRUE)
+
+    if ("try-error" %in% class(result)) {
+        stop("from_zelig_model not available for this fitted model.")
+    } else {
+        if (length(result) == 1) {
+            result <- result[[1]]
+            result <- strip_package_name(result)
+        } else if (length(result) > 1) {
+            if (.self$mi) {
+                message("Returning fitted model objects for each imputed data set in a list.")
+            } else if (.self$bootstrap) {
+            message("Returning fitted model objects for each bootstrapped data set in a list.")
+            } else {
+                message("Returning fitted model objects for each subset of the data created from the 'by' argument, in a list.")
+            }
+            result <- lapply(result, strip_package_name)
+        }
+        return(result)
+    }
+})
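+
+# Editor's sketch (hypothetical session code): recovering the underlying fit
+# from a Zelig reference-class object after estimation:
+#   m <- z.out$from_zelig_model()   # e.g. a "survreg", "tobit" or "lm" object
+#   class(m)
+#   AIC(m)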
+
+#' Method for extracting estimated coefficients from Zelig objects
+#' @param nonlist logical, whether to \code{unlist} the result if there is only
+#'   one set of coefficients. Enables backwards compatibility.
+
+z$methods(
+  get_coef = function(nonlist = FALSE) {
+    "Get estimated model coefficients"
+
+    is_uninitializedField(.self$zelig.out)
+    result <- try(lapply(.self$zelig.out$z.out, coef), silent = TRUE)
+    if ("try-error" %in% class(result))
+      stop("'coef' method' not implemented for model '", .self$name, "'")
+    else {
+        if (nonlist & length(result) == 1) result <- unlist(result)
+        return(result)
+    }
+  }
+)
+
+#' Method for extracting the estimated variance-covariance matrix from Zelig objects
+
+z$methods(
+    get_vcov = function() {
+        "Get estimated model variance-covariance matrix"
+        is_uninitializedField(.self$zelig.out)
+
+        if (length(.self$robust.se) == 0) .self$robust.se <- FALSE
+
+        if (!.self$robust.se) {
+            if ("geeglm" %in% class(.self$zelig.out$z.out[[1]]))
+                result <- lapply(.self$zelig.out$z.out, vcov_gee)
+            else if ("rq" %in% class(.self$zelig.out$z.out[[1]]))
+                result <- lapply(.self$zelig.out$z.out, vcov_rq)
+            else
+                result <- lapply(.self$zelig.out$z.out, vcov)
+        }
+        else if (.self$robust.se)
+            result <- lapply(.self$zelig.out$z.out, vcovHC, "HC1")
+
+        if ("try-error" %in% class(result))
+            stop("'vcov' method' not implemented for model '", .self$name, "'")
+        else
+            return(result)
+    }
+)
+
+#' Method for extracting p-values from Zelig objects
+#' @param object an object of class Zelig
+
+z$methods(
+  get_pvalue = function() {
+    "Get estimated model p-values"
+
+    is_uninitializedField(.self$zelig.out)
+    result <- try(lapply(.self$zelig.out$z.out, p_pull), silent = TRUE)
+    if ("try-error" %in% class(result))
+      stop("'get_pvalue' method' not implemented for model '", .self$name, "'")
+    else
+      return(result)
+  }
+)
+
+#' Method for extracting standard errors from Zelig objects
+#' @param object an object of class Zelig
+
+z$methods(
+  get_se = function() {
+    "Get estimated model standard errors"
+
+    is_uninitializedField(.self$zelig.out)
+    result <- try(lapply(.self$zelig.out$z.out, se_pull), silent = TRUE)
+    if ("try-error" %in% class(result))
+      stop("'get_se' method' not implemented for model '", .self$name, "'")
+    else
+      return(result)
+  }
+)
+
+z$methods(
+  get_residuals = function(...) {
+    "Get estimated model residuals"
+
+    is_uninitializedField(.self$zelig.out)
+    result <- try(lapply(.self$zelig.out$z.out, residuals, ...), silent = TRUE)
+    if ("try-error" %in% class(result))
+      stop("'residuals' method' not implemented for model '", .self$name, "'")
+    else
+      return(result)
+  }
+)
+
+z$methods(
+  get_df_residual = function() {
+    "Get residual degrees-of-freedom"
+
+    is_uninitializedField(.self$zelig.out)
+    result <- try(lapply(.self$zelig.out$z.out, df.residual), silent = TRUE)
+    if ("try-error" %in% class(result))
+      stop("'df.residual' method' not implemented for model '", .self$name, "'")
+    else
+      return(result)
+  }
+)
+
+z$methods(
+  get_fitted = function(...) {
+    "Get estimated fitted values"
+
+    is_uninitializedField(.self$zelig.out)
+    result <- lapply(.self$zelig.out$z.out, fitted, ...)
+    if ("try-error" %in% class(result))
+      stop("'predict' method' not implemented for model '", .self$name, "'")
+    else
+      return(result)
+  }
+)
+
+z$methods(
+  get_predict = function(...) {
+    "Get predicted values"
+
+    is_uninitializedField(.self$zelig.out)
+    result <- lapply(.self$zelig.out$z.out, predict, ...)
+    if ("try-error" %in% class(result))
+      stop("'predict' method' not implemented for model '", .self$name, "'")
+    else
+      return(result)
+  }
+)
+
+z$methods(
+  get_qi = function(qi = "ev", xvalue = "x", subset = NULL) {
+    "Get quantities of interest"
+
+    is_sims_present(.self$sim.out)
+
+    possiblexvalues <- names(.self$sim.out)
+    if(!(xvalue %in% possiblexvalues)){
+      stop(paste("xvalue must be ", paste(possiblexvalues, collapse = " or ") ,
+                 ".", sep = ""))
+    }
+    possibleqivalues <- c(names(.self$sim.out[[xvalue]]),
+                          names(.self$sim.out[[xvalue]][[1]]))
+    if(!(qi %in% possibleqivalues)){
+      stop(paste("qi must be ", paste(possibleqivalues, collapse=" or ") , ".",
+                                      sep = ""))
+    }
+    if(.self$mi){
+      if(is.null(subset)){
+        am.m <- length(.self$get_coef())
+        subset <- 1:am.m
+      }
+      tempqi <- do.call(rbind, .self$sim.out[[xvalue]][[qi]][subset])
+    } else if(.self$bootstrap){
+      if(is.null(subset)){
+        subset <- 1:.self$bootstrap.num
+      }
+      tempqi <- do.call(rbind, .self$sim.out[[xvalue]][[qi]][subset])
+    } else if(xvalue %in% c("range", "range1")) {
+      tempqi <- do.call(rbind, .self$sim.out[[xvalue]])[[qi]]
+    } else {
+      tempqi<- .self$sim.out[[xvalue]][[qi]][[1]]   # also works:   tempqi <- do.call(rbind, .self$sim.out[[xvalue]][[qi]])
+    }
+    return(tempqi)
+  }
+)
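+
+# Editor's sketch of pulling simulated quantities with this getter
+# (hypothetical session code, after setx() and sim() have been run on z.out):
+#   ev <- z.out$get_qi(qi = "ev", xvalue = "x")    # matrix of expected-value draws
+#   fd <- z.out$get_qi(qi = "fd", xvalue = "x1")   # first differences, if x1 was set
+#   colMeans(ev)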
+
+z$methods(
+    get_model_data = function() {
+        "Get data used to estimate the model"
+
+        is_uninitializedField(.self$zelig.out)
+        model_data <- .self$originaldata
+        return(model_data)
+    }
+)
+
+z$methods(
+  toJSON = function() {
+    "Convert Zelig object to JSON format"
+    if (!is.list(.self$json))
+      .self$json <- list()
+    .self$json$"name" <- .self$name
+    .self$json$"description" <- .self$description
+    .self$json$"outcome" <- list(modelingType = .self$outcome)
+    .self$json$"explanatory" <- list(modelingType = .self$explanatory)
+    .self$json$"vignette.url" <- .self$vignette.url
+    .self$json$"wrapper" <- .self$wrapper
+    tree <- c(class(.self)[1], .self$.refClassDef@refSuperClasses)
+    .self$json$tree <- head(tree, match("Zelig", tree) - 1)
+    .self$ljson <- .self$json
+    .self$json <- jsonlite::toJSON(.self$json, pretty = TRUE)
+    return(.self$json)
+  }
+)
+
+# empty default data generating process to avoid error if not created as model specific method
+z$methods(
+  mcfun = function(x, ...){
+    return( rep(1,length(x)) )
+  }
+)
+
+# Monte Carlo unit test
+z$methods(
+  mcunit = function(nsim = 500, minx = -2, maxx = 2, b0 = 0, b1 = 1, alpha = 1,
+                    ci = 0.95, plot = TRUE, ...){
+    passes <- TRUE
+    n.short <- 10      # number of points along x at which coverage is checked
+    alpha.ci <- 1 - ci   # alpha values for ci bounds, not speed parameter
+    if (.self$name %in% "ivreg") {
+        z.sim <- runif(n = nsim, min = minx, max = maxx)
+        z.seq <- seq(from = minx, to = maxx, length = nsim)
+        h.sim <- runif(n = nsim, min = minx, max = maxx)
+        h.seq <- seq(from = minx, to = maxx, length = nsim)
+    }
+    else {
+        x.sim <- runif(n = nsim, min = minx, max = maxx)
+        x.seq <- seq(from = minx, to = maxx, length = nsim)
+    }
+
+
+    if (.self$name %in% "ivreg") {
+        data.hat <- .self$mcfun(z = z.seq, h = h.seq,
+                                b0 = b0, b1 = b1, alpha = alpha,
+                                ..., sim = FALSE)
+        x.seq <- unlist(data.hat[2])
+        data.hat <- unlist(data.hat[1])
+    }
+    else
+        data.hat <- .self$mcfun(x = x.seq, b0 = b0, b1 = b1, alpha = alpha,
+                                 ..., sim = FALSE)
+    if(!is.data.frame(data.hat)){
+        if (.self$name %in% "ivreg") {
+            data.hat <- data.frame(x.seq = x.seq, z.seq = z.seq, h.seq = h.seq,
+                                   y.hat = data.hat)
+        }
+        else
+            data.hat <- data.frame(x.seq = x.seq, y.hat = data.hat)
+    }
+    if (.self$name %in% "ivreg") {
+        data.sim <- .self$mcfun(z = z.sim, h = h.sim,
+                                b0 = b0, b1 = b1, alpha = alpha, ...,
+                                sim = TRUE)
+        x.sim <- unlist(data.sim[2])
+        data.sim <- unlist(data.sim[1])
+    }
+    else
+        data.sim <- .self$mcfun(x = x.sim, b0 = b0, b1 = b1, alpha = alpha, ...,
+                                sim = TRUE)
+    if(!is.data.frame(data.sim)){
+        if (.self$name %in% "ivreg") {
+            data.sim <- data.frame(x.sim = x.sim, z.sim = z.sim, h.sim = h.sim,
+                                   y.sim = data.sim)
+        }
+        else
+            data.sim <- data.frame(x.sim = x.sim, y.sim = data.sim)
+    }
+
+    ## Estimate Zelig model and create numerical bounds on expected values
+    # This should be the solution, but requires fixing R scoping issue:
+    #.self$zelig(y.sim~x.sim, data=data.sim)
+    # formula will be overwritten in zelig() if .self$mcformula has been set
+
+    ## Instead, remove formula field and set by hard code
+    .self$mcformula <- NULL
+    if(.self$name %in% c("exp", "weibull", "lognorm")){
+        .self$zelig(Surv(y.sim, event) ~ x.sim, data = data.sim)
+    } else if (.self$name %in% c("relogit")) {
+        tau <- sum(data.sim$y.sim)/nsim
+        .self$zelig(y.sim ~ x.sim, tau = tau, data = data.sim)
+    } else if (.self$name %in% "ivreg") {
+        .self$zelig(y.sim ~ x.sim | z.sim + h.sim, data = data.sim)
+    }
+    else {
+      .self$zelig(y.sim ~ x.sim, data = data.sim)
+    }
+
+    x.short.seq <- seq(from = minx, to = maxx, length = n.short)
+    .self$setrange(x.sim = x.short.seq)
+    .self$sim()
+
+    if (.self$name %in% c("relogit")) {
+      data.short.hat <- .self$mcfun(x = x.short.seq, b0 = b0, b1 = b1,
+          alpha = alpha, keepall = TRUE, ..., sim = FALSE)
+    } else {
+      data.short.hat <- .self$mcfun(x = x.short.seq, b0 = b0, b1 = b1,
+          alpha = alpha, ..., sim = FALSE)
+    }
+
+    if(!is.data.frame(data.short.hat)){
+      data.short.hat <- data.frame(x.seq = x.short.seq, y.hat = data.short.hat)
+    }
+
+    history.ev <- history.pv <- matrix(NA, nrow = n.short, ncol = 2)
+    for(i in 1:n.short){
+        xtemp <- x.short.seq[i]
+        .self$setx(x.sim = xtemp)
+        .self$sim()
+        #temp<-sort( .self$sim.out$x$ev[[1]] )
+        temp <- .self$sim.out$range[[i]]$ev[[1]]
+        # This is for ev's that are a probability distribution across outcomes, like ordered logit/probit
+        if(ncol(temp) > 1){
+            temp <- temp %*% as.numeric(sort(unique(data.sim$y.sim)))  #as.numeric(colnames(temp))
+        }
+        temp <- sort(temp)
+
+        # calculate bounds of expected values
+        history.ev[i,1] <- temp[max(round(length(temp)*(alpha.ci/2)),1) ]     # Lower ci bound
+        history.ev[i,2] <- temp[round(length(temp)*(1 - (alpha.ci/2)))]       # Upper ci bound
+        #temp<-sort( .self$sim.out$x$pv[[1]] )
+        temp <- sort( .self$sim.out$range[[i]]$pv[[1]] )
+
+        # check that ci contains true value
+        passes <- passes & (min(history.ev[i,]) <= data.short.hat$y.hat[i] ) &
+                           (max(history.ev[i,]) >= data.short.hat$y.hat[i] )
+
+        #calculate bounds of predicted values
+        history.pv[i,1] <- temp[max(round(length(temp)*(alpha.ci/2)),1) ]     # Lower ci bound
+        history.pv[i,2] <- temp[round(length(temp)*(1 - (alpha.ci/2)))]       # Upper ci bound
+    }
+
+    ## Plot Monte Carlo Data
+    if(plot){
+      all.main = substitute(
+        paste(modelname, "(", beta[0], "=", b0, ", ", beta[1], "=", b1,",", alpha, "=", a0, ")"),
+        list(modelname = .self$name, b0 = b0, b1=b1, a0 = alpha)
+      )
+
+      all.ylim<-c( min(c(data.sim$y.sim, data.hat$y.hat)) , max(c(data.sim$y.sim, data.hat$y.hat)) )
+
+      plot(data.sim$x.sim, data.sim$y.sim, main=all.main, ylim=all.ylim, xlab="x", ylab="y", col="steelblue")
+      par(new=TRUE)
+      plot(data.hat$x.seq, data.hat$y.hat, main="", ylim=all.ylim, xlab="", ylab="", xaxt="n", yaxt="n", type="l", col="green", lwd=2)
+
+      for(i in 1:n.short){
+        lines(x=rep(x.short.seq[i],2), y=c(history.pv[i,1],history.pv[i,2]), col="lightpink", lwd=1.6)
+        lines(x=rep(x.short.seq[i],2), y=c(history.ev[i,1],history.ev[i,2]), col="firebrick", lwd=1.6)
+      }
+    }
+    return(passes)
+
+  }
+)
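+
+# Rough sketch of how mcunit() is intended to be exercised in tests, assuming a
+# model class that defines its own mcfun() (the default mcfun above only
+# returns 1s). Comments only:
+#
+#   z5 <- zls$new()
+#   z5$mcunit(nsim = 1000, b0 = 0, b1 = 1, plot = TRUE)   # TRUE if CIs cover truth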
+
+# rebuild dataset by duplicating observations by (rounded) weights
+z$methods(
+  buildDataByWeights = function() {
+    if(!.self$acceptweights){
+      idata <- .self$data
+      iweights <- .self$weights
+      ceilweights <- ceiling(iweights)
+      n.obs <- nrow(idata)
+      windex <- rep(1:n.obs, ceilweights)
+      idata <- idata[windex,]
+      .self$data <- idata
+      if(any(iweights != ceiling(iweights))){
+        cat("Noninteger weights were set, but the model in Zelig is only able to use integer valued weights.\n",
+            "Each weight has been rounded up to the nearest integer.\n\n")
+      }
+    }
+  }
+)
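+
+# The duplication logic above in isolation (illustration only): non-integer
+# weights are rounded up and each row is repeated that many times.
+#
+#   w <- c(1.2, 2.7, 0.4)
+#   rep(1:length(w), ceiling(w))   # 1 1 2 2 2 3 -> row indices used to rebuild data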
+
+# rebuild dataset by bootstrapping using weights as probabilities
+z$methods(
+  buildDataByWeights2 = function() {
+    if(!.self$acceptweights){
+      iweights <- .self$weights
+      if(any(iweights != ceiling(iweights))){
+        cat("Noninteger weights were set, but the model in Zelig is only able to use integer valued weights.\n",
+            "A bootstrapped version of the dataset was constructed using the weights as sample probabilities.\n\n")
+        idata <- .self$data
+        n.obs <- nrow(idata)
+        n.w   <- sum(iweights)
+        iweights <- iweights/n.w
+        windex <- sample(x=1:n.obs, size=n.w, replace=TRUE, prob=iweights)  # Should size be n.w or n.obs?  Relatedly, n.w might not be integer.
+        idata <- idata[windex,]
+        .self$data <- idata
+      }else{
+        .self$buildDataByWeights()  # If all weights are integers, just use duplication to rebuild dataset.
+      }
+    }
+  }
+)
+
+
+# rebuild dataset by bootstrapping using weights as probabilities
+#   might possibly combine this method with $buildDataByWeights2()
+z$methods(
+  buildDataByBootstrap = function() {
+    idata <- .self$data
+    n.boot <- .self$bootstrap.num
+    n.obs <- nrow(idata)
+
+    if(!is.null(.self$weights)){
+      iweights <- .self$weights
+      n.w   <- sum(iweights)
+      iweights <- iweights/n.w
+    } else {
+      iweights <- NULL
+    }
+
+    windex <- bootstrapIndex <- NULL
+    for(i in 1:n.boot) {
+      windex <- c(windex, sample(x=1:n.obs, size=n.obs,
+                  replace = TRUE, prob = iweights))
+      bootstrapIndex <- c(bootstrapIndex, rep(i,n.obs))
+    }
+    # Last dataset is original data
+    idata <- rbind(idata[windex,], idata)
+    bootstrapIndex <- c(bootstrapIndex, rep(n.boot+1,n.obs))
+
+    idata$bootstrapIndex <- bootstrapIndex
+    .self$data <- idata
+    .self$by <- c("bootstrapIndex", .self$by)
+  }
+)
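+
+# Core of the resampling step above, in isolation (illustration only): each of
+# the n.boot bootstrap datasets is drawn with replacement, optionally using
+# normalized weights as sampling probabilities.
+#
+#   n <- 5
+#   w <- c(2, 1, 1, 3, 1); w <- w / sum(w)
+#   sample(1:n, size = n, replace = TRUE, prob = w)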
+
+
+
+
+
+z$methods(
+  feedback = function() {
+    "Send feedback to the Zelig team"
+    if (!.self$with.feedback)
+      return("ZeligFeedback package not installed")
+    # If ZeligFeedback is installed
+    print("ZeligFeedback package installed")
+    print(ZeligFeedback::feedback(.self))
+  }
+)
+
+# z$methods(
+#   finalize = function() {
+#     if (!.self$with.feedback)
+#       return("ZeligFeedback package not installed")
+#     # If ZeligFeedback is installed
+#     print("Thanks for providing Zelig usage information")
+#     # print(ZeligFeedback::feedback(.self))
+#     write(paste("feedback", ZeligFeedback::feedback(.self)),
+#           file = paste0("test-zelig-finalize-", date(), ".txt"))
+#   }
+# )
+
+
+#' Summary method for Zelig objects
+#' @param object An Object of Class Zelig
+#' @param ... Additional parameters to be passed to summary
+setMethod("summary", "Zelig",
+          function(object, ...) {
+            object$summarize(...)
+          }
+)
+
+#' Plot method for Zelig objects
+#' @param x An Object of Class Zelig
+#' @param y unused
+#' @param ... Additional parameters to be passed to plot
+setMethod("plot", "Zelig",
+          function(x, ...) {
+            x$graph(...)
+          }
+)
+
+#' Names method for Zelig objects
+#' @param x An Object of Class Zelig
+setMethod("names", "Zelig",
+          function(x) {
+            x$get_names()
+          }
+)
+
+setGeneric("vcov")
+#' Variance-covariance method for Zelig objects
+#' @param object An Object of Class Zelig
+setMethod("vcov", "Zelig",
+          function(object) {
+            object$get_vcov()
+          }
+)
+
+#' Method for extracting estimated coefficients from Zelig objects
+#' @param object An Object of Class Zelig
+setMethod("coefficients", "Zelig",
+          function(object) {
+              object$get_coef(nonlist = TRUE)
+          }
+)
+
+setGeneric("coef")
+#' Method for extracting estimated coefficients from Zelig objects
+#' @param object An Object of Class Zelig
+setMethod("coef", "Zelig",
+          function(object) {
+            object$get_coef(nonlist = TRUE)
+          }
+)
+
+#' Method for extracting residuals from Zelig objects
+#' @param object An Object of Class Zelig
+setMethod("residuals", "Zelig",
+          function(object) {
+            object$get_residuals()
+          }
+)
+
+#' Method for extracting residual degrees-of-freedom from Zelig objects
+#' @param object An Object of Class Zelig
+setMethod("df.residual", "Zelig",
+          function(object) {
+            object$get_df_residual()
+          }
+)
+
+setGeneric("fitted")
+#' Method for extracting estimated fitted values from Zelig objects
+#' @param object An Object of Class Zelig
+#' @param ... Additional parameters to be passed to fitted
+setMethod("fitted", "Zelig",
+          function(object, ...) {
+            object$get_fitted(...)
+          }
+)
+
+setGeneric("predict")
+#' Method for getting predicted values from Zelig objects
+#' @param object An Object of Class Zelig
+#' @param ... Additional parameters to be passed to predict
+setMethod("predict", "Zelig",
+          function(object, ...) {
+            object$get_predict(...)
+          }
+)
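+
+# The S4 methods above let a Zelig reference-class object be queried with the
+# usual generics. A sketch, again assuming `swiss` and the `zls` model class:
+#
+#   z5 <- zls$new()
+#   z5$zelig(Fertility ~ Education, data = swiss)
+#   coef(z5); vcov(z5); fitted(z5); predict(z5)
+#   summary(z5)
+#   z5$setx(Education = 5); z5$sim(); plot(z5)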
diff --git a/R/plots.R b/R/plots.R
new file mode 100755
index 0000000..1d396dc
--- /dev/null
+++ b/R/plots.R
@@ -0,0 +1,1019 @@
+#' Plot Quantities of Interest in a Zelig-fashion
+#'
+#' Various graph generation for different common types of simulated results from
+#' Zelig
+#' @usage simulations.plot(y, y1=NULL, xlab="", ylab="", main="", col=NULL, line.col=NULL,
+#' axisnames=TRUE)
+#' @param y A matrix or vector of simulated results generated by Zelig, to be
+#' graphed.
+#' @param y1 For comparison of two sets of simulated results at different
+#' choices of covariates, this should be an object of the same type and
+#' dimension as y.  If no comparison is to be made, this should be NULL.
+#' @param xlab Label for the x-axis.
+#' @param ylab Label for the y-axis.
+#' @param main Main plot title.
+#' @param col A vector of colors.  Colors will be used in turn as the graph is
+#' built for main plot objects. For nominal/categorical data, this colors
+#' renders as the bar color, while for numeric data it renders as the background
+#' color.
+#' @param line.col  A vector of colors.  Colors will be used in turn as the graph is
+#' built for line color shading of plot objects.
+#' @param axisnames a character-vector, specifying the names of the axes
+#' @return nothing
+#' @author James Honaker
+simulations.plot <-function(y, y1=NULL, xlab="", ylab="", main="", col=NULL, line.col=NULL, axisnames=TRUE) {
+
+    binarytest <- function(j){
+      if(!is.null(attr(j,"levels"))) return(identical( sort(levels(j)),c(0,1)))
+      return(FALSE)
+    }
+
+
+
+    ## Univariate Plots ##
+    if(is.null(y1)){
+
+        if (is.null(col))
+        col <- rgb(100,149,237,maxColorValue=255)
+
+        if (is.null(line.col))
+        line.col <- "black"
+
+        # Integer Values
+        if ((length(unique(y))<11 & all(as.integer(y) == y)) | is.factor(y) | is.character(y)) {
+
+                if(is.factor(y) | is.character(y)){
+                    y <- as.numeric(y)
+                }
+
+                # Create a sequence of names
+                nameseq <- paste("Y=", min(y):max(y), sep="")
+
+                # Set the heights of the barplots.
+                # Note that tabulate() requires all values to be positive integers.
+                # So, we subtract the min value (ensuring everything is at least zero)
+                # then add 1.
+                bar.heights <- tabulate(y - min(y) + 1) / length(y)
+
+                # Barplot with (potentially) some zero columns
+                output <- barplot(bar.heights, xlab=xlab, ylab=ylab, main=main, col=col[1],
+                    axisnames=axisnames, names.arg=nameseq)
+
+        # Vector of 1's and 0's
+        } else if(ncol(as.matrix(y))>1 & binarytest(y) ){
+
+            n.y <- nrow(y)
+            # Precedence is names > colnames > 1:n
+            if(is.null(names(y))){
+                if(is.null(colnames(y) )){
+                    all.names <- 1:n.y
+                }else{
+                    all.names <- colnames(y)
+                }
+            }else{
+                all.names <- names(y)
+            }
+
+            # Barplot with (potentially) some zero columns
+            output <- barplot( apply(y,2,sum)/n.y, xlab=xlab, ylab=ylab, main=main, col=col[1],
+                axisnames=axisnames, names.arg=all.names)
+
+        # Continuous Values
+        } else if(is.numeric(y)){
+            if(ncol(as.matrix(y))>1){
+                ncoly <- ncol(y)
+                hold.dens <- list()
+                ymax <- xmax <- xmin <- rep(0,ncol(y))
+                for(i in 1:ncoly){
+                    hold.dens[[i]] <- density(y[,i])
+                    ymax[i] <- max(hold.dens[[i]]$y)
+                    xmax[i] <- max(hold.dens[[i]]$x)
+                    xmin[i] <- min(hold.dens[[i]]$x)
+                }
+                shift <- 0:ncoly
+                all.xlim <- c(min(xmin), max(xmax))
+                all.ylim <- c(0,ncoly)
+
+                # Precedence is names > colnames > 1:n
+                if(is.null(names(y))){
+                    if(is.null(colnames(y) )){
+                        all.names <- 1:ncoly
+                    }else{
+                        all.names <- colnames(y)
+                    }
+                }else{
+                    all.names <- names(y)
+                }
+                shrink <- 0.9
+                for(i in 1:ncoly ){
+                    if(i<ncoly){
+                        output <- plot(hold.dens[[i]]$x, shrink*hold.dens[[i]]$y/ymax[i] + shift[i], xaxt="n", yaxt="n", xlab="", ylab="", main="", col=line.col[1], xlim=all.xlim, ylim=all.ylim, type="l")
+                        if(!identical(col[1],"n")){
+                            polygon(hold.dens[[i]]$x, shrink*hold.dens[[i]]$y/ymax[i] + shift[i], col=col[1])
+                        }
+                        abline(h=shift[i+1])
+                        text(x=all.xlim[1], y=(shift[i] + shift[i+1])/2, labels=all.names[i], pos=4)
+                        par(new=TRUE)
+                    }else{
+                        output <- plot(hold.dens[[i]]$x, shrink*hold.dens[[i]]$y/ymax[i] + shift[i], yaxt="n", xlab=xlab, ylab=ylab, main=main, col=line.col[1], xlim=all.xlim, ylim=all.ylim, type="l")
+                        if(!identical(col[1],"n")){
+                            polygon(hold.dens[[i]]$x, shrink*hold.dens[[i]]$y/ymax[i] + shift[i], col=col[1])
+                        }
+                        text(x=all.xlim[1], y=(shift[i] + shift[i+1])/2, labels=all.names[i], pos=4)
+                    }
+                }
+
+            }else{
+                den.y <- density(y)
+                output <- plot(den.y, xlab=xlab, ylab=ylab, main=main, col=line.col[1])
+                if(!identical(col[1],"n")){
+                    polygon(den.y$x, den.y$y, col=col[1])
+                }
+            }
+        }
+
+    ## Comparison Plots ##
+
+    }else{
+
+        # Integer - Plot and shade a matrix
+        if(( length(unique(y))<11 & all(as.integer(y) == y) ) | is.factor(y) | is.character(y)){
+
+            if(is.factor(y) | is.character(y)){
+                y <- as.numeric(y)
+                y1 <- as.numeric(y1)
+            }
+
+            yseq<-min(c(y,y1)):max(c(y,y1))
+            nameseq<- paste("Y=",yseq,sep="")
+            n.y<-length(yseq)
+
+            colors<-rev(heat.colors(n.y^2))
+            lab.colors<-c("black","white")
+            comp<-matrix(NA,nrow=n.y,ncol=n.y)
+
+            for(i in 1:n.y){
+                for(j in 1:n.y){
+                    flag<- y==yseq[i] & y1==yseq[j]
+                    comp[i,j]<-mean(flag)
+                }
+            }
+
+            old.pty<-par()$pty
+            old.mai<-par()$mai
+
+            par(pty="s")
+            par(mai=c(0.3,0.3,0.3,0.1))
+
+            image(z=comp, axes=FALSE, col=colors, zlim=c(min(comp),max(comp)),main=main )
+
+            locations.x<-seq(from=0,to=1,length=nrow(comp))
+            locations.y<-locations.x
+
+            for(m in 1:n.y){
+                for(n in 1:n.y){
+                    text(x=locations.x[m],y=locations.y[n],labels=paste(round(100*comp[m,n])), col=lab.colors[(comp[m,n]> ((max(comp)-min(comp))/2) )+1])
+                }
+            }
+
+            axis(side=1,labels=nameseq, at=seq(0,1,length=n.y), cex.axis=1, las=1)
+            axis(side=2,labels=nameseq, at=seq(0,1,length=n.y), cex.axis=1, las=3)
+            box()
+            par(pty=old.pty,mai=old.mai)
+        ##  Two Vectors of 1's and 0's
+        }else if( ncol(as.matrix(y))>1 & binarytest(y) & ncol(as.matrix(y1))>1 & binarytest(y1)   )  {
+
+            # Everything in this section assumes ncol(y)==ncol(y1)
+
+            # Precedence is names > colnames > 1:n
+            if(is.null(names(y))){
+                if(is.null(colnames(y) )){
+                    nameseq <- 1:ncol(y)
+                }else{
+                    nameseq <- colnames(y)
+                }
+            }else{
+                nameseq <- names(y)
+            }
+
+            n.y <- ncol(y)
+            yseq <- 1:n.y
+
+            y <- y %*% yseq
+            y1 <- y1 %*% yseq
+
+            ## FROM HERE ON -- Replicates above.  Should address more generically
+            colors<-rev(heat.colors(n.y^2))
+            lab.colors<-c("black","white")
+            comp<-matrix(NA,nrow=n.y,ncol=n.y)
+
+            for(i in 1:n.y){
+                for(j in 1:n.y){
+                    flag<- y==yseq[i] & y1==yseq[j]
+                    comp[i,j]<-mean(flag)
+                }
+            }
+
+            old.pty<-par()$pty
+            old.mai<-par()$mai
+
+            par(pty="s")
+            par(mai=c(0.3,0.3,0.3,0.1))
+
+            image(z=comp, axes=FALSE, col=colors, zlim=c(min(comp),max(comp)),main=main )
+
+            locations.x<-seq(from=0,to=1,length=nrow(comp))
+            locations.y<-locations.x
+
+            for(m in 1:n.y){
+                for(n in 1:n.y){
+                    text(x=locations.x[m],y=locations.y[n],labels=paste(round(100*comp[m,n])), col=lab.colors[(comp[m,n]> ((max(comp)-min(comp))/2) )+1])
+                }
+            }
+
+            axis(side=1,labels=nameseq, at=seq(0,1,length=n.y), cex.axis=1, las=1)
+            axis(side=2,labels=nameseq, at=seq(0,1,length=n.y), cex.axis=1, las=3)
+            box()
+            par(pty=old.pty,mai=old.mai)
+
+        ## Numeric - Plot two densities on top of each other
+        }else if(is.numeric(y) & is.numeric(y1)){
+
+            if(is.null(col)){
+                semi.col.x <-rgb(142,229,238,150,maxColorValue=255)
+                semi.col.x1<-rgb(255,114,86,150,maxColorValue=255)
+                col<-c(semi.col.x,semi.col.x1)
+            }else if(length(col)<2){
+                col<-c(col,col)
+            }
+
+            if(ncol(as.matrix(y))>1){
+                shrink <- 0.9
+                ncoly <- ncol(y)  # Assumes columns of y match cols y1.  Should check or enforce.
+                # Precedence is names > colnames > 1:n
+                if(is.null(names(y))){
+                    if(is.null(colnames(y) )){
+                        all.names <- 1:ncoly
+                    }else{
+                        all.names <- colnames(y)
+                    }
+                }else{
+                    all.names <- names(y)
+                }
+
+                hold.dens.y <- hold.dens.y1 <- list()
+                ymax <- xmax <- xmin <- rep(0,ncoly)
+                for(i in 1:ncoly){
+                    hold.dens.y[[i]] <- density(y[,i])
+                    hold.dens.y1[[i]] <- density(y1[,i], bw=hold.dens.y[[i]]$bw)
+                    ymax[i] <- max(hold.dens.y[[i]]$y, hold.dens.y1[[i]]$y)
+                    xmax[i] <- max(hold.dens.y[[i]]$x, hold.dens.y1[[i]]$x)
+                    xmin[i] <- min(hold.dens.y[[i]]$x, hold.dens.y1[[i]]$x)
+                }
+                all.xlim <- c(min(xmin), max(xmax))
+                all.ylim <- c(0,ncoly)
+                shift <- 0:ncoly
+                for(i in 1:ncoly ){
+                    if(i<ncoly){
+                        output <- plot(hold.dens.y[[i]]$x, shrink*hold.dens.y[[i]]$y/ymax[i] + shift[i], xaxt="n", yaxt="n", xlab="", ylab="", main="", col=line.col[1], xlim=all.xlim, ylim=all.ylim, type="l")
+                        par(new=TRUE)
+                        output <- plot(hold.dens.y1[[i]]$x, shrink*hold.dens.y1[[i]]$y/ymax[i] + shift[i], xaxt="n", yaxt="n", xlab="", ylab="", main="", col=line.col[2], xlim=all.xlim, ylim=all.ylim, type="l")
+
+                        if(!identical(col[1],"n")){
+                            polygon(hold.dens.y[[i]]$x, shrink*hold.dens.y[[i]]$y/ymax[i] + shift[i], col=col[1])
+                        }
+                        if(!identical(col[2],"n")){
+                            polygon(hold.dens.y1[[i]]$x, shrink*hold.dens.y1[[i]]$y/ymax[i] + shift[i], col=col[2])
+                        }
+                        abline(h=shift[i+1])
+                        text(x=all.xlim[1], y=(shift[i] + shift[i+1])/2, labels=all.names[i], pos=4)
+                        par(new=TRUE)
+                    }else{
+                        output <- plot(hold.dens.y[[i]]$x, shrink*hold.dens.y[[i]]$y/ymax[i] + shift[i], yaxt="n", xlab=xlab, ylab=ylab, main=main, col=line.col[1], xlim=all.xlim, ylim=all.ylim, type="l")
+                        par(new=TRUE)
+                        output <- plot(hold.dens.y1[[i]]$x, shrink*hold.dens.y1[[i]]$y/ymax[i] + shift[i], yaxt="n", xlab=xlab, ylab=ylab, main=main, col=line.col[1], xlim=all.xlim, ylim=all.ylim, type="l")
+
+                        if(!identical(col[1],"n")){
+                            polygon(hold.dens.y[[i]]$x, shrink*hold.dens.y[[i]]$y/ymax[i] + shift[i], col=col[1])
+                        }
+                        if(!identical(col[2],"n")){
+                            polygon(hold.dens.y1[[i]]$x, shrink*hold.dens.y1[[i]]$y/ymax[i] + shift[i], col=col[2])
+                        }
+                        text(x=all.xlim[1], y=(shift[i] + shift[i+1])/2, labels=all.names[i], pos=4)
+                    }
+                }
+            }else{
+                den.y<-density(y)
+                den.y1<-density(y1,bw=den.y$bw)
+
+                all.xlim<-c(min(c(den.y$x,den.y1$x)),max(c(den.y$x,den.y1$x)))
+                all.ylim<-c(min(c(den.y$y,den.y1$y)),max(c(den.y$y,den.y1$y)))
+
+                output<-plot(den.y,xlab=xlab,ylab=ylab,main=main,col=col[1],xlim=all.xlim,ylim=all.ylim)
+                par(new=TRUE)
+                output<-plot(den.y1,xlab=xlab,ylab=ylab,main="",col=col[2],xlim=all.xlim,ylim=all.ylim)
+
+                if(!identical(col[1],"n")){
+                    polygon(den.y$x,den.y$y,col=col[1])
+                }
+                if(!identical(col[2],"n")){
+                    polygon(den.y1$x,den.y1$y,col=col[2])
+                }
+            }
+        }
+    }
+}
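+
+# Illustrative calls to simulations.plot() with artificial draws (numeric
+# simulations give a density plot; supplying y1 overlays two densities).
+# Comments only:
+#
+#   draws  <- rnorm(1000, mean = 0)
+#   draws1 <- rnorm(1000, mean = 1)
+#   simulations.plot(draws, main = "E(Y|X)")
+#   simulations.plot(draws, draws1, main = "Comparison of E(Y|X) and E(Y|X1)")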
+
+
+
+
+
+
+#' Default Plot Design For Zelig Model QI's
+#'
+#' @usage qi.plot(obj, ...)
+#' @param obj A reference class zelig5 object
+#' @param ... Parameters to be passed to the `truehist' function which is
+#' implicitly called for numeric simulations
+#' @author James Honaker with panel layouts from Matt Owen
+
+qi.plot <- function (obj, ...) {
+    # Save old state
+    old.par <- par(no.readonly=T)
+
+    if(is_timeseries(obj)){
+        par(mfcol=c(3,1))
+        if(obj$bsetx & !obj$bsetx1) {
+            ## If only setx and not setx1 were called on the object
+            zeligACFplot(obj$get_qi("acf", xvalue="x"))
+        }
+        else{
+            zeligACFplot(obj$get_qi("acf", xvalue="x1"))
+        }
+        ci.plot(obj, qi="pvseries.shock")
+        ci.plot(obj, qi="pvseries.innovation")
+        return()
+    }
+
+    # Determine whether two "Expected Values" qi's exist
+         both.ev.exist <- (length(obj$sim.out$x$ev)>0) & (length(obj$sim.out$x1$ev)>0)
+    # Determine whether two "Predicted Values" qi's exist
+         both.pv.exist <- (length(obj$sim.out$x$pv)>0) & (length(obj$sim.out$x1$pv)>0)
+
+    color.x <- rgb(242, 122, 94, maxColorValue=255)
+    color.x1 <- rgb(100, 149, 237, maxColorValue=255)
+    # Interpolation of the above colors in rgb color space:
+    color.mixed <- rgb(t(round((col2rgb(color.x) + col2rgb(color.x1))/2)), maxColorValue=255)
+
+    if (! ("x" %in% names(obj$sim.out))) {
+        return(par(old.par))
+    } else if (! ("x1" %in% names(obj$sim.out))) {
+
+
+    panels <- matrix(1:2, 2, 1)
+
+        # The plotting device:
+        #
+        # +-----------+
+        # |     1     |
+        # +-----------+
+        # |     2     |
+        # +-----------+
+    } else {
+        panels <- matrix(c(1:5, 5), ncol=2, nrow=3, byrow = TRUE)
+
+        # the plotting device:
+        #
+        # +-----+-----+
+        # |  1  |  2  |
+        # +-----+-----+
+        # |  3  |  4  |
+        # +-----+-----+
+        # |     5     |
+        # +-----------+
+
+        panels <- if (xor(both.ev.exist, both.pv.exist))
+        rbind(panels, c(6, 6))
+
+        # the plotting device:
+        #
+        # +-----+-----+
+        # |  1  |  2  |
+        # +-----+-----+
+        # |  3  |  4  |
+        # +-----+-----+
+        # |     5     |
+        # +-----------+
+        # |     6     |
+        # +-----------+
+
+        else if (both.ev.exist && both.pv.exist)
+        rbind(panels, c(6, 7))
+        else
+        panels
+
+        # the plotting device:
+        #
+        # +-----+-----+
+        # |  1  |  2  |
+        # +-----+-----+
+        # |  3  |  4  |
+        # +-----+-----+
+        # |     5     |
+        # +-----+-----+
+        # |  6  |  7  |
+        # +-----+-----+
+    }
+
+    layout(panels)
+
+    titles <- obj$setx.labels
+
+    # Plot each simulation
+    if(length(obj$sim.out$x$pv)>0)
+        simulations.plot(obj$get_qi(qi="pv", xvalue="x"), main = titles$pv, col = color.x, line.col = "black")
+
+    if(length(obj$sim.out$x1$pv)>0)
+        simulations.plot(obj$get_qi(qi="pv", xvalue="x1"), main = titles$pv1, col = color.x1, line.col = "black")
+
+    if(length(obj$sim.out$x$ev)>0)
+        simulations.plot(obj$get_qi(qi="ev", xvalue="x"), main = titles$ev, col = color.x, line.col = "black")
+
+    if(length(obj$sim.out$x1$ev)>0)
+        simulations.plot(obj$get_qi(qi="ev", xvalue="x1"), main = titles$ev1, col = color.x1, line.col = "black")
+
+    if(length(obj$sim.out$x1$fd)>0)
+        simulations.plot(obj$get_qi(qi="fd", xvalue="x1"), main = titles$fd, col = color.mixed, line.col = "black")
+
+    if(both.pv.exist)
+        simulations.plot(y=obj$get_qi(qi="pv", xvalue="x"), y1=obj$get_qi(qi="pv", xvalue="x1"), main = "Comparison of Y|X and Y|X1", col = paste(c(color.x, color.x1), "80", sep=""), line.col = "black")
+
+    if(both.ev.exist)
+        simulations.plot(y=obj$get_qi(qi="ev", xvalue="x"), y1=obj$get_qi(qi="ev", xvalue="x1"), main = "Comparison of E(Y|X) and E(Y|X1)", col = paste(c(color.x, color.x1), "80", sep=""), line.col = "black")
+
+
+    # Restore old state
+    par(old.par)
+
+    # Return old parameter invisibly
+    invisible(old.par)
+}
+
+
+
+#' Method for plotting qi simulations across a range within a variable, with confidence intervals
+#'
+#' @param obj A reference class zelig5 object
+#' @param qi a character-string specifying the quantity of interest to plot
+#' @param var The variable to be used on the x-axis. Default is the variable
+#' across all the chosen values with smallest nonzero variance
+#' @param ... Parameters to be passed to the `truehist' function which is
+#' implicitly called for numeric simulations
+#' @param main a character-string specifying the main heading of the plot
+#' @param sub a character-string specifying the sub heading of the plot
+#' @param xlab a character-string specifying the label for the x-axis
+#' @param ylab a character-string specifying the label for the y-axis
+#' @param xlim Limits to the x-axis
+#' @param ylim Limits to the y-axis
+#' @param legcol ``legend color'', a valid color used for plotting the line
+#' colors in the legend
+#' @param col a valid vector of colors of at least length 3 to use to color the
+#' confidence intervals
+#' @param leg ``legend position'', an integer from 1 to 4, specifying the
+#' position of the legend. 1 to 4 correspond to ``SE'', ``SW'', ``NW'', and
+#' ``NE'' respectively.  Setting to 0 or ``n'' turns off the legend.
+#' @param legpos exact coordinates and sizes for the legend.
+#' Overrides the ``leg'' argument
+#' @param ci vector of length three of confidence interval levels to draw.
+#' @param discont optional point of discontinuity along the x-axis at which
+#' to interrupt the graph
+#' @return the current graphical parameters. This is subject to change in future
+#' implementations of Zelig
+#' @author James Honaker
+#' @usage ci.plot(obj, qi="ev", var=NULL, ..., main = NULL, sub =
+#'  NULL, xlab = NULL, ylab = NULL, xlim = NULL, ylim =
+#'  NULL, legcol="gray20", col=NULL, leg=1, legpos=
+#'  NULL, ci = c(80, 95, 99.9), discont=NULL)
+#' @export
+
+ci.plot <- function(obj, qi = "ev", var = NULL, ..., main = NULL, sub = NULL,
+                    xlab = NULL, ylab = NULL, xlim = NULL, ylim = NULL,
+                    legcol = "gray20", col = NULL, leg = 1, legpos = NULL,
+                    ci = c(80, 95, 99.9), discont = NULL) {
+
+    is_zelig(obj)
+    if(!is_timeseries(obj)) is_simsrange(obj$sim.out)
+    msg <- 'Simulations for more than one fitted observation are required.'
+    is_length_not_1(obj$sim.out$range, msg = msg)
+    if (!is.null(obj$sim.out$range1)) {
+        is_length_not_1(obj$sim.out$range1, msg)
+        if (length(obj$sim.out$range) != length(obj$sim.out$range1))
+            stop('The two fitted data ranges are not the same length.',
+                 call. = FALSE)
+    }
+
+    ###########################
+    #### Utility Functions ####
+    # Define function to cycle over range list and extract correct qi's
+    ## CAN THESE NOW BE REPLACED WITH THE GETTER METHODS?
+
+    extract.sims <- function(obj, qi) {
+        d <- length(obj$sim.out$range)
+        k <- length(obj$sim.out$range[[1]][qi][[1]][[1]])  # THAT IS A LONG PATH THAT MAYBE SHOULD BE CHANGED
+        hold <- matrix(NA, nrow = k, ncol = d)
+        for (i in 1:d) {
+            hold[, i] <- obj$sim.out$range[[i]][qi][[1]][[1]]  # THAT IS A LONG PATH THAT MAYBE SHOULD BE CHANGED
+        }
+        return(hold)
+    }
+
+    extract.sims1 <- function(obj, qi) {
+        # Should find better architecture for alternate range sims
+        d <- length(obj$sim.out$range1)
+        k <- length(obj$sim.out$range1[[1]][qi][[1]][[1]])  # THAT IS A LONG PATH THAT MAYBE SHOULD BE CHANGED
+        hold <- matrix(NA, nrow = k, ncol = d)
+        for (i in 1:d) {
+            hold[, i] <- obj$sim.out$range1[[i]][qi][[1]][[1]]  # THAT IS A LONG PATH THAT MAYBE SHOULD BE CHANGED
+        }
+        return(hold)
+    }
+
+    # Define functions to compute confidence intervals CAN WE MERGE THESE TOGETHER SO AS NOT TO
+    # HAVE TO SORT TWICE?
+    ci.upper <- function(x, alpha) {
+        pos <- max(round((1 - (alpha/100)) * length(x)), 1)
+        return(sort(x)[pos])
+    }
+
+    ci.lower <- function(x, alpha) {
+        pos <- max(round((alpha/100) * length(x)), 1)
+        return(sort(x)[pos])
+    }
+
+    ###########################
+
+    if(length(ci)<3){
+        ci<-rep(ci,3)
+    }
+    if(length(ci)>3){
+        ci<-ci[1:3]
+    }
+    ci<-sort(ci)
+
+    ## Timeseries:
+    if(is_timeseries(obj)){
+        #xmatrix<-              ## Do we need to know the x in which the shock/innovation occurred?  For secondary graphs, titles, legends?
+        xname <- "Time"
+        qiseries <- c("pvseries.shock","pvseries.innovation","evseries.shock","evseries.innovation")
+        if (!qi %in% qiseries){
+            cat(paste("Error: For Timeseries models, argument qi must be one of ", paste(qiseries, collapse=" or ") ,".\n", sep="") )
+            return()
+        }
+        if (obj$bsetx & !obj$bsetx1) {
+            ## If setx has been called and setx1 has not been called
+            ev<-t( obj$get_qi(qi=qi, xvalue="x") ) # NOTE THE NECESSARY TRANSPOSE.  Should we more clearly standardize this?
+        } else {
+            ev<-t( obj$get_qi(qi=qi, xvalue="x1") )   # NOTE THE NECESSARY TRANSPOSE.  Should we more clearly standardize this?
+        }
+        d<-ncol(ev)
+        xseq<-1:d
+        ev1 <- NULL  # Maybe want to add ability to overlay another graph?
+
+        # Define xlabel
+        if (is.null(xlab))
+        xlab <- xname
+        if (is.null(ylab)){
+            if(qi %in% c("pvseries.shock", "pvseries.innovation"))
+                ylab<- as.character(obj$setx.labels["pv"])
+            if(qi %in% c("evseries.shock", "evseries.innovation"))
+                ylab<- as.character(obj$setx.labels["ev"])
+        }
+
+        if (is.null(main))
+        main <- as.character(obj$setx.labels[qi])
+        if (is.null(discont))
+        discont <- 22.5    # NEED TO SET AUTOMATICALLY
+
+    ## Everything Else:
+    }else{
+        d <- length(obj$sim.out$range)
+
+        if (d < 1) {
+            return()  # Should add warning
+        }
+        num_cols <- length(obj$setx.out$range[[1]]$mm[[1]] )
+        xmatrix <- matrix(NA,nrow = d, ncol = num_cols)    # THAT IS A LONG PATH THAT MAYBE SHOULD BE CHANGED
+        for (i in 1:d){
+            xmatrix[i,] <- matrix(obj$setx.out$range[[i]]$mm[[1]],
+                                  ncol = num_cols)   # THAT IS A LONG PATH THAT MAYBE SHOULD BE CHANGED
+        }
+
+        if (d == 1 && is.null(var)) {
+            warning("Must specify the `var` parameter when plotting the confidence interval of an unvarying model. Plotting nothing.")
+            return(invisible(FALSE))
+        }
+
+        xvarnames <- names(as.data.frame( obj$setx.out$range[[1]]$mm[[1]]))  # MUST BE A BETTER WAY/PATH TO GET NAMES
+
+        if(is.character(var)){
+            if( !(var %in% xvarnames   ) ){
+                warning("Specified variable for confidence interval plot is not in estimated model.  Plotting nothing.")
+                return(invisible(FALSE))
+            }
+        }
+
+        if (is.null(var)) {
+            # Determine x-axis variable based on variable with unique fitted values equal to the number of scenarios
+            length_unique <- function(x) length(unique(x))
+            var.unique <- apply(xmatrix, 2, length_unique)
+            var.seq <- 1:ncol(xmatrix)
+            position <- var.seq[var.unique == d]
+            if (length(position) > 1) {
+                position <- position[1] # arbitrarily pick the first variable if more than one
+                message(sprintf('%s chosen as the x-axis variable. Use the var argument to specify a different variable.', xvarnames[position]))
+            }
+        } else {
+            if(is.numeric(var)){
+                position <- var
+            }else if(is.character(var)){
+                position <- grep(var,xvarnames)
+            }
+        }
+        position <- min(position)
+        xseq <- xmatrix[,position]
+        xname <- xvarnames[position]
+        # Define xlabel
+        if (is.null(xlab))
+        xlab <- paste("Range of",xname)
+
+        # Use "qi" argument to select quantities of interest and set labels
+        ev1<-NULL
+        if(!is.null(obj$sim.out$range1)){
+            ev1<-extract.sims1(obj,qi=qi)
+        }
+        ev<-extract.sims(obj,qi=qi)
+        if (is.null(ylab)){
+            ylab <- as.character(obj$setx.labels[qi])
+        }
+
+    }
+
+
+
+
+    #
+    k<-ncol(ev)
+    n<-nrow(ev)
+
+    #
+    if(is.null(col)){
+        myblue1<-rgb( 100, 149, 237, alpha=50, maxColorValue=255)
+        myblue2<-rgb( 152, 245, 255, alpha=50, maxColorValue=255)
+        myblue3<-rgb( 191, 239, 255, alpha=70, maxColorValue=255)
+        myred1 <-rgb( 237, 149, 100, alpha=50, maxColorValue=255)
+        myred2 <-rgb( 255, 245, 152, alpha=50, maxColorValue=255)
+        myred3 <-rgb( 255, 239, 191, alpha=70, maxColorValue=255)
+
+        col<-c(myblue1,myblue2,myblue3,myred1,myred2,myred3)
+    }else{
+        if(length(col)<6){
+            col<-rep(col,6)[1:6]
+        }
+    }
+
+    # Define function to numerically extract summaries of distributions from set of all simulated qi's
+    form.history <- function (k,xseq,results,ci=c(80,95,99.9)){
+
+        history<-matrix(NA, nrow=k,ncol=8)
+        for (i in 1:k) {
+            v <- c(
+            xseq[i],
+            median(results[,i]),
+
+            ci.upper(results[,i],ci[1]),
+            ci.lower(results[,i],ci[1]),
+
+            ci.upper(results[,i],ci[2]),
+            ci.lower(results[,i],ci[2]),
+
+            ci.upper(results[,i],ci[3]),
+            ci.lower(results[,i],ci[3])
+            )
+
+            history[i, ] <- v
+        }
+        if (k == 1) {
+            left <- c(
+            xseq[1]-.5,
+            median(results[,1]),
+
+            ci.upper(results[,1],ci[1]),
+            ci.lower(results[,1],ci[1]),
+
+            ci.upper(results[,1],ci[2]),
+            ci.lower(results[,1],ci[2]),
+
+            ci.upper(results[,1],ci[3]),
+            ci.lower(results[,1],ci[3])
+            )
+            right <- c(
+            xseq[1]+.5,
+            median(results[,1]),
+
+            ci.upper(results[,1],ci[1]),
+            ci.lower(results[,1],ci[1]),
+
+            ci.upper(results[,1],ci[2]),
+            ci.lower(results[,1],ci[2]),
+
+            ci.upper(results[,1],ci[3]),
+            ci.lower(results[,1],ci[3])
+            )
+            v <- c(
+            xseq[1],
+            median(results[,1]),
+
+            ci.upper(results[,1],ci[1]),
+            ci.lower(results[,1],ci[1]),
+
+            ci.upper(results[,1],ci[2]),
+            ci.lower(results[,1],ci[2]),
+
+            ci.upper(results[,1],ci[3]),
+            ci.lower(results[,1],ci[3])
+            )
+            history <- rbind(left, v, right)
+        }
+
+        return(history)
+    }
+
+    history<-  form.history(k,xseq,ev,ci)
+    if(!is.null(ev1)){
+        history1<- form.history(k,xseq,ev1,ci)
+    }else{
+        history1<-NULL
+    }
+
+    # This is for small sets that have been duplicated so as to have observable volume
+    if(k==1){
+        k<-3
+    }
+
+    # Specify x-axis length
+    all.xlim <- if (is.null(xlim))
+    c(min(c(history[, 1],history1[, 1])),max(c(history[, 1],history1[, 1])))
+    else
+    xlim
+
+
+    # Specify y-axis length
+    all.ylim <-if (is.null(ylim))
+    c(min(c(history[, -1],history1[, -1])),max(c(history[, -1],history1[, -1])))
+    else
+    ylim
+
+
+    # Define y label
+    if (is.null(ylab))
+    ylab <- "Expected Values: E(Y|X)"
+
+
+    ## This is the plot
+
+    par(bty="n")
+    centralx<-history[,1]
+    centraly<-history[,2]
+
+
+    if(is.null(discont)){
+        gotok <- k
+    }else{
+        gotok <- sum(xseq < discont)
+        if((gotok<2) | (gotok>(k-2))){
+            cat("Warning: Discontinuity is located at edge or outside the range of x-axis.\n")
+            gotok<-k
+            discont<-NULL
+        }
+        if(gotok<k){
+            gotokp1<- gotok+1
+            centralx<-c(centralx[1:gotok], NA, centralx[(gotok+1):length(centralx)])
+            centraly<-c(centraly[1:gotok], NA, centraly[(gotok+1):length(centraly)])
+        }
+    }
+
+    plot(x=centralx, y=centraly, type="l", xlim=all.xlim, ylim=all.ylim, main = main, sub = sub, xlab=xlab, ylab=ylab)
+
+    polygon(c(history[1:gotok,1],history[gotok:1,1]),c(history[1:gotok,7],history[gotok:1,8]),col=col[3],border="white")
+    polygon(c(history[1:gotok,1],history[gotok:1,1]),c(history[1:gotok,5],history[gotok:1,6]),col=col[2],border="gray90")
+    polygon(c(history[1:gotok,1],history[gotok:1,1]),c(history[1:gotok,3],history[gotok:1,4]),col=col[1],border="gray60")
+    polygon(c(history[1:gotok,1],history[gotok:1,1]),c(history[1:gotok,7],history[gotok:1,8]),col=NA,border="white")
+
+    if(!is.null(discont)){
+        polygon(c(history[gotokp1:k,1],history[k:gotokp1,1]),c(history[gotokp1:k,7],history[k:gotokp1,8]),col=col[3],border="white")
+        polygon(c(history[gotokp1:k,1],history[k:gotokp1,1]),c(history[gotokp1:k,5],history[k:gotokp1,6]),col=col[2],border="gray90")
+        polygon(c(history[gotokp1:k,1],history[k:gotokp1,1]),c(history[gotokp1:k,3],history[k:gotokp1,4]),col=col[1],border="gray60")
+        polygon(c(history[gotokp1:k,1],history[k:gotokp1,1]),c(history[gotokp1:k,7],history[k:gotokp1,8]),col=NA,border="white")
+        abline(v=discont, lty=5, col="grey85")
+    }
+
+    if(!is.null(ev1)){
+
+        lines(x=history1[1:gotok, 1], y=history1[1:gotok, 2], type="l")
+        if(!is.null(discont)){
+            lines(x=history1[gotokp1:k, 1], y=history1[gotokp1:k, 2], type="l")
+        }
+
+        polygon(c(history1[1:gotok,1],history1[gotok:1,1]),c(history1[1:gotok,7],history1[gotok:1,8]),col=col[6],border="white")
+        polygon(c(history1[1:gotok,1],history1[gotok:1,1]),c(history1[1:gotok,5],history1[gotok:1,6]),col=col[5],border="gray90")
+        polygon(c(history1[1:gotok,1],history1[gotok:1,1]),c(history1[1:gotok,3],history1[gotok:1,4]),col=col[4],border="gray60")
+        polygon(c(history1[1:gotok,1],history1[gotok:1,1]),c(history1[1:gotok,7],history1[gotok:1,8]),col=NA,border="white")
+
+        if(!is.null(discont)){
+            polygon(c(history1[gotokp1:k,1],history1[k:gotokp1,1]),c(history1[gotokp1:k,7],history1[k:gotokp1,8]),col=col[6],border="white")
+            polygon(c(history1[gotokp1:k,1],history1[k:gotokp1,1]),c(history1[gotokp1:k,5],history1[k:gotokp1,6]),col=col[5],border="gray90")
+            polygon(c(history1[gotokp1:k,1],history1[k:gotokp1,1]),c(history1[gotokp1:k,3],history1[k:gotokp1,4]),col=col[4],border="gray60")
+            polygon(c(history1[gotokp1:k,1],history1[k:gotokp1,1]),c(history1[gotokp1:k,7],history1[k:gotokp1,8]),col=NA,border="white")
+        }
+    }
+
+    ## This is the legend
+    if((leg != "n") & (leg != 0)){
+
+        if(is.null(legpos)){
+            if(leg==1){
+                legpos<-c(.91,.04,.2,.05)
+            }else if(leg==2){
+                legpos<-c(.09,.04,.2,.05)
+            }else if(leg==3){
+                legpos<-c(.09,.04,.8,.05)
+            }else{
+                legpos<-c(.91,.04,.8,.05)
+            }
+        }
+
+        lx<-min(all.xlim)+ legpos[1]*(max(all.xlim)- min(all.xlim))
+        hx<-min(all.xlim)+ (legpos[1]+legpos[2])*(max(all.xlim)- min(all.xlim))
+
+        deltax<-(hx-lx)*.1
+
+        my<-min(all.ylim) +legpos[3]*min(max(all.ylim) - min(all.ylim))
+        dy<-legpos[4]*(max(all.ylim) - min(all.ylim))
+
+
+        lines(c(hx+deltax,hx+2*deltax,hx+2*deltax,hx+deltax),c(my+3*dy,my+3*dy,my-3*dy,my-3*dy),col=legcol)
+        lines(c(hx+3*deltax,hx+4*deltax,hx+4*deltax,hx+3*deltax),c(my+1*dy,my+1*dy,my-1*dy,my-1*dy),col=legcol)
+        lines(c(lx-deltax,lx-2*deltax,lx-2*deltax,lx-deltax),c(my+2*dy,my+2*dy,my-2*dy,my-2*dy),col=legcol)
+        lines(c(lx-5*deltax,lx),c(my,my),col="white",lwd=3)
+        lines(c(lx-5*deltax,lx),c(my,my),col=legcol)
+        lines(c(lx,hx),c(my,my))
+
+        polygon(c(lx,lx,hx,hx),c(my-3*dy,my+3*dy,my+3*dy,my-3*dy),col=col[3],border="white")
+        polygon(c(lx,lx,hx,hx),c(my-2*dy,my+2*dy,my+2*dy,my-2*dy),col=col[2],border="gray90")
+        polygon(c(lx,lx,hx,hx),c(my-1*dy,my+1*dy,my+1*dy,my-1*dy),col=col[1],border="gray60")
+        polygon(c(lx,lx,hx,hx),c(my-3*dy,my+3*dy,my+3*dy,my-3*dy),col=NA,border="white")
+
+        text(lx,my,labels="median",pos=2,cex=0.5,col=legcol)
+        text(lx,my+2*dy,labels=paste("ci",ci[2],sep=""),pos=2,cex=0.5,col=legcol)
+        text(hx,my+1*dy,labels=paste("ci",ci[1],sep=""),pos=4,cex=0.5,col=legcol)
+        text(hx,my+3*dy,labels=paste("ci",ci[3],sep=""),pos=4,cex=0.5,col=legcol)
+    }
+
+}
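+
+# Sketch of a ci.plot() call across a range of fitted values, assuming the
+# `swiss` data and the `zls` model class. Comments only:
+#
+#   z5 <- zls$new()
+#   z5$zelig(Fertility ~ Education, data = swiss)
+#   z5$setrange(Education = seq(5, 50, by = 5))
+#   z5$sim()
+#   ci.plot(z5, qi = "ev", ci = c(80, 95, 99.9))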
+
+#' Receiver Operator Characteristic Plots
+#'
+#' The 'rocplot' command generates a receiver operator characteristic plot to
+#' compare the in-sample (default) or out-of-sample fit for two logit or probit
+#' regressions.
+#'
+#' @usage
+#' rocplot(z1, z2,
+#' cutoff = seq(from=0, to=1, length=100), lty1="solid",
+#' lty2="dashed", lwd1=par("lwd"), lwd2=par("lwd"),
+#' col1=par("col"), col2=par("col"),
+#' main="ROC Curve",
+#' xlab = "Proportion of 1's Correctly Predicted",
+#' ylab="Proportion of 0's Correctly Predicted",
+#' plot = TRUE,
+#' ...
+#' )
+#'
+#' @param z1 first model
+#' @param z2 second model
+#' @param cutoff A vector of cut-off values between 0 and 1, at which to
+#'   evaluate the proportion of 0s and 1s correctly predicted by the first and
+#'   second model.  By default, this is 100 increments between 0 and 1
+#'   inclusive
+#' @param lty1 the line type of the first model (defaults to 'line')
+#' @param lty2 the line type of the second model (defaults to 'dashed')
+#' @param lwd1 the line width of the first model (defaults to 1)
+#' @param lwd2 the line width of the second model (defaults to 1)
+#' @param col1 the color of the first model (defaults to 'black')
+#' @param col2 the color of the second model (defaults to 'black')
+#' @param main a title for the plot (defaults to "ROC Curve")
+#' @param xlab a label for the X-axis
+#' @param ylab a label for the Y-axis
+#' @param plot whether to generate a plot to the selected device
+#' @param \dots additional parameters to be passed to the plot
+#' @return if plot is TRUE, rocplot simply generates a plot. Otherwise, a list
+#'   with the following is produced:
+#'   \item{roc1}{a matrix containing a vector of x-coordinates and
+#'     y-coordinates corresponding to the number of ones and zeros correctly
+#'     predicted for the first model.}
+#'   \item{roc2}{a matrix containing a vector of x-coordinates and
+#'     y-coordinates corresponding to the number of ones and zeros correctly
+#'     predicted for the second model.}
+#'   \item{area1}{the area under the first ROC curve, calculated using
+#'     Riemann sums.}
+#'   \item{area2}{the area under the second ROC curve, calculated using
+#'     Riemann sums.}
+#' @export
+#" @author Kosuke Imai and Olivia Lau
+rocplot <- function(z1, z2,
+                    cutoff = seq(from=0, to=1, length=100), lty1="solid",
+                    lty2="dashed", lwd1=par("lwd"), lwd2=par("lwd"),
+                    col1=par("col"), col2=par("col"),
+                    main="ROC Curve",
+                    xlab = "Proportion of 1's Correctly Predicted",
+                    ylab="Proportion of 0's Correctly Predicted",
+                    plot = TRUE,
+                    ...) {
+  y1 <- z1$data[as.character(z1$formula[[2]])]
+  y2 <- z2$data[as.character(z2$formula[[2]])]
+  fitted1 <- fitted(z1)[[1]]
+  fitted2 <- fitted(z2)[[1]]
+  roc1 <- roc2 <- matrix(NA, nrow = length(cutoff), ncol = 2)
+  colnames(roc1) <- colnames(roc2) <- c("ones", "zeros")
+  for (i in 1:length(cutoff)) {
+    roc1[i,1] <- mean(fitted1[y1==1] >= cutoff[i])
+    roc2[i,1] <- mean(fitted2[y2==1] >= cutoff[i])
+    roc1[i,2] <- mean(fitted1[y1==0] < cutoff[i])
+    roc2[i,2] <- mean(fitted2[y2==0] < cutoff[i])
+  }
+  if (plot) {
+    plot(0:1, 0:1, type = "n", xaxs = "i", yaxs = "i",
+         main=main, xlab=xlab, ylab=ylab, ...)
+    lines(roc1, lty = lty1, lwd = lwd1, col=col1)
+    lines(roc2, lty = lty2, lwd = lwd2, col=col2)
+    abline(1, -1, lty = "dotted")
+  }
+  else {
+    area1 <- area2 <- array()
+    for (i in 2:length(cutoff)) {
+      area1[i-1] <- (roc1[i,2] - roc1[(i-1),2]) * roc1[i,1]
+      area2[i-1] <- (roc2[i,2] - roc2[(i-1),2]) * roc2[i,1]
+    }
+    return(list(roc1 = roc1,
+                roc2 = roc2,
+                area1 = sum(na.omit(area1)),
+                area2 = sum(na.omit(area2))))
+  }
+}
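+
+# Sketch of comparing two fitted binary-outcome models, assuming Zelig's
+# `turnout` example data and the "logit" model. Comments only:
+#
+#   data(turnout)
+#   z1 <- zelig(vote ~ age + educate, model = "logit", data = turnout)
+#   z2 <- zelig(vote ~ age, model = "logit", data = turnout)
+#   rocplot(z1, z2)                  # draws the two ROC curves
+#   rocplot(z1, z2, plot = FALSE)    # returns curves and areas instead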
+
+
+#' Plot Autocorrelation Function from Zelig QI object
+#' @keywords internal
+
+
+zeligACFplot <- function(z, omitzero=FALSE,  barcol="black", epsilon=0.1, col=NULL, main="Autocorrelation Function", xlab="Period", ylab="Correlation of Present Shock with Future Outcomes", ylim=NULL, ...){
+
+    x <- z$expected.acf
+    ci.x <- z$ci.acf
+
+    if(omitzero){
+        x<-x[2:length(x)]
+        ci.x$ci.upper <- ci.x$ci.upper[2:length(ci.x$ci.upper)]
+        ci.x$ci.lower <- ci.x$ci.lower[2:length(ci.x$ci.lower)]
+    }
+
+    if(is.null(ylim)){
+        ylim<-c(min( c(ci.x$ci.lower, 0, x) ), max( c(ci.x$ci.upper, 0 , x) ))
+
+    }
+    if(is.null(col)){
+        col <- rgb(100,149,237,maxColorValue=255)
+    }
+
+    bout <- barplot(x, col=col, main=main, xlab=xlab, ylab=ylab, ylim=ylim, ...)
+
+    n <- length(x)
+    xseq <- as.vector(bout)
+    NAseq <- rep(NA, n)
+
+    xtemp <- cbind( xseq-epsilon, xseq+epsilon, NAseq)
+    xtemp <- as.vector(t(xtemp))
+    ytemp <- cbind(ci.x$ci.upper, ci.x$ci.upper, NAseq)
+    ytemp <- as.vector(t(ytemp))
+    lines(x=xtemp ,y=ytemp, col=barcol)
+
+    ytemp <- cbind(ci.x$ci.lower, ci.x$ci.lower, NAseq)
+    ytemp <- as.vector(t(ytemp))
+    lines(x=xtemp ,y=ytemp, col=barcol)
+
+    xtemp <- cbind( xseq, xseq, NAseq)
+    xtemp <- as.vector(t(xtemp))
+    ytemp <- cbind(ci.x$ci.upper, ci.x$ci.lower, NAseq)
+    ytemp <- as.vector(t(ytemp))
+    lines(x=xtemp ,y=ytemp, col=barcol)
+}
diff --git a/R/utils.R b/R/utils.R
new file mode 100755
index 0000000..1453dae
--- /dev/null
+++ b/R/utils.R
@@ -0,0 +1,703 @@
+#' Compute the Statistical Mode of a Vector
+#' @aliases Mode mode
+#' @param x a vector of numeric, factor, or ordered values
+#' @return the statistical mode of the vector. If more than one mode exists,
+#'  the last one in the factor order is arbitrarily chosen (by design)
+#' @export
+#' @author Christopher Gandrud and Matt Owen
+
+Mode <- function (x) {
+    # build a table of values of x
+    tab <- table(as.factor(x))
+    # find the mode, if there is more than one arbitrarily pick the last
+    max_tab <- names(which(tab == max(tab)))
+    v <- max_tab[length(max_tab)]
+    # if it came in as a factor, we need to re-cast it as a factor, with the same exact levels
+    if (is.factor(x))
+        return(factor(v, levels = levels(x)))
+    # re-cast as any other data-type
+    as(v, class(x))
+}
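+
+# Examples (comments only):
+#   Mode(c(1, 2, 2, 3))              # 2, returned as numeric
+#   Mode(factor(c("a", "b", "b")))   # "b", returned as a factor with levels a, b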
+
+## Zelig 3 and 4 backward compatibility
+## This enables backward compatibility, but results in a warning when library attached
+# mode <- Mode
+
+#' Compute the Statistical Median of a Vector
+#' @param x a vector of numeric or ordered values
+#' @param na.rm ignored
+#' @return the median of the vector
+#' @export
+#' @author Matt Owen
+
+Median <- function (x, na.rm=NULL) {
+    v <- ifelse(is.numeric(x),
+                median(x),
+                levels(x)[ceiling(median(as.numeric(x)))]
+    )
+    if (is.ordered(x))
+        v <- factor(v, levels(x))
+    v
+}
+
+#' Create a table, but ensure that the correct
+#' columns exist. In particular, this allows for
+#' entries with zero as a value, which is not
+#' the default for standard tables
+#' @param x a vector
+#' @param levels a vector of levels
+#' @param ... parameters for table
+#' @return a table
+#' @author Matt Owen
+
+table.levels <- function (x, levels, ...) {
+    # if levels are not explicitly set, then
+    # search inside of x
+    if (missing(levels)) {
+        levels <- attr(x, 'levels')
+        table(factor(x, levels=levels), ...)
+    }
+    # otherwise just do the normal thing
+    else {
+        table(factor(x, levels=levels), ...)
+    }
+}
+
+#' Compute central tendency as appropriate to data type
+#' @param val a vector of values
+#' @return a mean (if numeric) or a median (if ordered) or mode (otherwise)
+#' @export
+
+avg <- function(val) {
+    if (is.numeric(val))
+        mean(val)
+    else if (is.ordered(val))
+        Median(val)
+    else
+        Mode(val)
+}
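+
+# avg() dispatches on type (comments only):
+#   avg(c(1, 2, 3, 10))                                       # mean -> 4
+#   avg(ordered(c("lo", "hi", "hi"), levels = c("lo", "hi"))) # Median -> "hi"
+#   avg(factor(c("a", "b", "b")))                             # Mode -> "b"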
+
+#' Set new value of a factor variable, checking for existing levels
+#' @param fv factor variable
+#' @param v value
+#' @return a factor variable with a value \code{val} and the same levels
+#' @keywords internal
+setfactor <- function (fv, v) {
+    lev <- levels(fv)
+    if (!v %in% lev)
+        stop("Wrong factor")
+    return(factor(v, levels = lev))
+}
+
+#' Set new value of a variable as appropriate to data type
+#' @param val old value
+#' @param newval new value
+#' @return a variable of the same type with a value \code{val}
+#' @keywords internal
+setval <- function(val, newval) {
+    if (is.numeric(val))
+        newval
+    else if (is.ordered(val))
+        newval
+    else if (is.logical(val))
+        newval
+    else {
+        lev <- levels(val)
+        if (!newval %in% lev)
+            stop("Wrong factor", call. = FALSE)
+        return(factor(newval, levels = lev))
+    }
+}
+
+
+#' Calculate the reduced dataset to be used in \code{\link{setx}}
+#'
+#' This method is used internally
+#'
+#' @param dataset Zelig object data, possibly split to deal with \code{by}
+#'   argument
+#' @param s list of variables and their tentative \code{setx} values
+#' @param formula a simplified version of the Zelig object formula (typically
+#'   with 1 on the lhs)
+#' @param data Zelig object data
+#' @param avg function of data transformations
+#' @return a list of all the model variables either at their central tendency or
+#'   their \code{setx} value
+#'
+#' @keywords internal
+#' @author Christine Choirat and Christopher Gandrud
+#' @export
+
+reduce = function(dataset, s, formula, data, avg = avg) {
+    pred <- try(terms(fit <- lm(formula, data), "predvars"), silent = TRUE)
+    if ("try-error" %in% class(pred)) # exp and weibull
+        pred <- try(terms(fit <- survreg(formula, data), "predvars"),
+                    silent = TRUE)
+
+    dataset <- model.frame(fit)
+
+    ldata <- lapply(dataset, avg)
+    if (length(s) > 0) {
+        n <- union(as.character(attr(pred, "predvars"))[-1], names(dataset))
+        if (is.list(s[[1]])) s <- s[[1]]
+        m <- match(names(s), n)
+        ma <- m[!is.na(m)]
+        if (!all(complete.cases(m))) {
+            w <- paste("Variable '", names(s[is.na(m)]), "' not in data set.\n",
+                       sep = "")
+            stop(w, call. = FALSE)
+        }
+        for (i in seq(n[ma])) {
+            ldata[n[ma]][i][[1]] <- setval(dataset[n[ma]][i][[1]],
+                                           s[n[ma]][i][[1]])
+        }
+    }
+    return(ldata)
+}
+
+
+
+#' Create QI summary matrix
+#' @param qi quantity of interest in the continuous case
+#' @return a formatted qi
+#' @keywords internal
+#' @author Christine Choirat
+statmat <- function(qi) {
+    if (!is.matrix(qi))
+        qi <- as.matrix(qi, ncol = 1)
+    m <- t(apply(qi, 2, quantile, c(.5, .025, .975), na.rm = TRUE))
+    n <- matrix(apply(qi, 2, mean, na.rm = TRUE))
+    colnames(n) <- "mean"
+    o <- matrix(apply(qi, 2, sd, na.rm = TRUE))
+    colnames(o) <- "sd"
+    p <- cbind(n, o, m)
+    return(p)
+}
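+# Illustrative usage (editor's sketch): one summary row per simulated quantity
+# qi <- cbind(q1 = rnorm(1000), q2 = rnorm(1000, mean = 2))
+# statmat(qi)   # columns: mean, sd, 50%, 2.5%, 97.5%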
+
+#' Summarize a Discrete Quantity of Interest as Proportions in Each Level
+#' @param qi quantity of interest in the discrete case
+#' @param num number of simulations
+#' @return a formatted quantity of interest
+#' @keywords internal
+#' @author Christine Choirat
+statlevel <- function(qi, num) {
+    if (is.matrix(qi)){
+        #m <- t(apply(qi, 2, table)) / num
+        all.levels <- levels(qi)
+        m <- t(apply(qi, 2, function(x)
+            table(factor(x, levels=all.levels)))) / num
+    } else {
+        m <- table(qi) / num
+    }
+    return(m)
+}
+
+#' Pass Quantities of Interest to Appropriate Summary Function
+#'
+#' @param qi quantity of interest (e.g., estimated value or predicted value)
+#' @param num number of simulations
+#' @return a formatted qi
+#' @keywords internal
+#' @author Christine Choirat
+stat <- function(qi, num) {
+    if (is.null(attr(qi, "levels")))
+        return(statmat(qi))
+    else
+        return(statlevel(qi, num))
+}
+
+#' Generate Formulae that Consider Clustering
+#'
+#' This method is used internally by the "Zelig" Package to interpret
+#' clustering in GEE models.
+#' @param formula a formula object
+#' @param cluster a vector
+#' @return a formula object describing clustering
+cluster.formula <- function (formula, cluster) {
+    # Convert LHS of formula to a string
+    lhs <- deparse(formula[[2]])
+    cluster.part <- if (is.null(cluster))
+        # a NULL cluster assigns each observation to its own cluster
+        sprintf("cluster(1:nrow(%s))", lhs)
+    else
+        # Otherwise we trust user input
+        sprintf("cluster(%s)", cluster)
+    update(formula, paste(". ~ .", cluster.part, sep = " + "))
+}
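+# Illustrative usage (editor's sketch):
+# cluster.formula(y ~ x1 + x2, "id")   # y ~ x1 + x2 + cluster(id)
+# cluster.formula(y ~ x1 + x2, NULL)   # y ~ x1 + x2 + cluster(1:nrow(y))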
+
+
+#' Zelig Copy of plyr::mutate to avoid namespace conflict with dplyr
+#'
+#' @source Hadley Wickham (2011). The Split-Apply-Combine Strategy for Data
+#' Analysis. Journal of Statistical Software, 40(1), 1-29. URL
+#' \url{http://www.jstatsoft.org/v40/i01/}.
+#' @keywords internal
+zelig_mutate <- function (.data, ...)
+{
+    stopifnot(is.data.frame(.data) || is.list(.data) || is.environment(.data))
+    cols <- as.list(substitute(list(...))[-1])
+    cols <- cols[names(cols) != ""]
+    for (col in names(cols)) {
+        .data[[col]] <- eval(cols[[col]], .data, parent.frame())
+    }
+    .data
+}
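+# Illustrative usage (editor's sketch): add a column computed from existing ones
+# zelig_mutate(head(mtcars), kpl = mpg * 0.425)   # returns the data with a kpl column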
+
+#' Convenience function for setrange and setrange1
+#'
+#' @param x data passed to setrange or setrange1
+#' @keywords internal
+
+expand_grid_setrange <- function(x) {
+    #    m <- expand.grid(x)
+    set_lengths <- unlist(lapply(x, length))
+    unique_set_lengths <- unique(as.vector(set_lengths))
+
+    m <- data.frame()
+    for (i in unique_set_lengths) {
+        temp_df <- data.frame(row.names = 1:i)
+        for (u in 1:length(x)) {
+            if (length(x[[u]]) == i) {
+                temp_df <- cbind(temp_df, x[[u]])
+                names(temp_df)[ncol(temp_df)] <- names(x)[u]
+            }
+        }
+        if (nrow(m) == 0) m <- temp_df
+        else m <- merge(m, temp_df)
+    }
+    if (nrow(m) == 1)
+        warning('Only one fitted observation provided to setrange.\nConsider using setx instead.',
+                call. = FALSE)
+    return(m)
+}
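+# Illustrative usage (editor's sketch): sets of equal length stay aligned, while
+# sets of different lengths are fully crossed
+# expand_grid_setrange(list(x1 = 1:3, x2 = 5))   # 3 rows: x1 = 1, 2, 3 with x2 = 5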
+
+#' Bundle Multiply Imputed Data Sets into an Object for Zelig
+#'
+#' This object prepares multiply imputed data sets so they can be used by
+#'   \code{zelig}.
+#' @note This function creates a list of \code{data.frame} objects, which
+#'   resembles the storage of imputed data sets in the \code{amelia} object.
+#' @param ... a set of \code{data.frame}'s or a single list of \code{data.frame}'s
+#' @return an \code{mi} object composed of a list of data frames.
+#'
+#' @author Matt Owen, James Honaker, and Christopher Gandrud
+#'
+#' @examples
+#' # create datasets
+#' n <- 100
+#' x1 <- runif(n)
+#' x2 <- runif(n)
+#' y <- rnorm(n)
+#' data.1 <- data.frame(y = y, x = x1)
+#' data.2 <- data.frame(y = y, x = x2)
+#'
+#' # merge datasets into one object as if imputed datasets
+#'
+#' mi.out <- to_zelig_mi(data.1, data.2)
+#'
+#' # pass object in place of data argument
+#' z.out <- zelig(y ~ x, model = "ls", data = mi.out)
+#' @export
+
+to_zelig_mi <- function (...) {
+
+    # Get arguments as list
+    imputations <- list(...)
+
+    # If user passes a list of data.frames rather than several data.frames as separate arguments
+    if((class(imputations[[1]]) == 'list') & (length(imputations) == 1)){
+        imputations = imputations[[1]]
+    }
+
+    # Labelling
+    names(imputations) <- paste0("imp", 1:length(imputations))
+
+    # Ensure that everything is a data.frame
+    for (k in length(imputations):1) {
+        if (!is.data.frame(imputations[[k]])){
+            imputations[[k]] <- NULL
+            warning("Item ", k, " of the provided objects is not a data.frame and will be ignored.\n")
+        }
+    }
+
+    if(length(imputations) < 1){
+        stop("The resulting object contains no data.frames, and as such is not a valid multiple imputation object.",
+             call. = FALSE)
+    }
+    if(length(imputations) < 2){
+        stop("The resulting object contains only one data.frame, and as such is not a valid multiple imputation object.",
+             call. = FALSE)
+    }
+    class(imputations) <-c("mi", "list")
+
+    return(imputations)
+}
+
+#' Enables backward compatibility for preparing non-Amelia imputed data sets
+#' for \code{zelig}.
+#'
+#' See \code{\link{to_zelig_mi}}
+#'
+#' @param ... a set of \code{data.frame}'s
+#' @return an \code{mi} object composed of a list of data frames.
+mi <- to_zelig_mi
+
+#' Conduct variable transformations called inside a \code{zelig} call
+#'
+#' @param formula model formulae
+#' @param data data frame used in \code{formula}
+#' @param FUN character string of the transformation function. Currently
+#'   supports \code{factor} and \code{log}.
+#' @param check logical whether to just check if a formula contains an
+#'   internally called transformation and return \code{TRUE} or \code{FALSE}
+#' @param f_out logical whether to return the converted formula
+#' @param d_out logical whether to return the converted data frame. Note:
+#'   \code{f_out} must be missing
+#'
+#' @author Christopher Gandrud
+#' @keywords internal
+
+transformer <- function(formula, data, FUN = 'log', check, f_out, d_out) {
+
+    if (!missing(data)) {
+        if (is.data.frame(data))
+            is_df <- TRUE
+        else if (!is.data.frame(data) & is.list(data))
+            is_df <- FALSE
+        else
+            stop('data must be either a data.frame or a list', call. = FALSE)
+    }
+
+    if (FUN == 'as.factor') FUN_temp <- 'as\\.factor'
+    else FUN_temp <- FUN
+    FUN_str <- sprintf('%s.*\\(', FUN_temp)
+
+    f <- as.character(formula)[3]
+    f_split <- unlist(strsplit(f, split = '\\+'))
+    to_transform <- grep(pattern = FUN_str, f_split)
+
+    if (!missing(check)) {
+        if (length(to_transform) > 0) return(TRUE)
+        else return(FALSE)
+    }
+
+    if (length(to_transform) > 0) {
+        to_transform_raw <- trimws(f_split[to_transform])
+        if (FUN == 'factor')
+            to_transform_raw <- gsub('^as\\.', '', to_transform_raw)
+        to_transform_plain_args <- gsub(FUN_str, '', to_transform_raw)
+        to_transform_plain <- gsub(',\\(.*)', '', to_transform_plain_args)
+        to_transform_plain <- gsub('\\)', '', to_transform_plain)
+        to_transform_plain <- trimws(gsub(',.*', '', to_transform_plain))
+
+        if (is_df)
+            not_in_data <- !all(to_transform_plain %in% names(data))
+        else if (!isTRUE(is_df))
+            not_in_data <- !all(to_transform_plain %in% names(data[[1]]))
+        if (not_in_data) stop('Unable to find variable to transform.')
+
+        if (!missing(f_out)) {
+            f_split[to_transform] <- to_transform_plain
+            rhs <- paste(f_split, collapse = ' + ')
+            lhs <- gsub('\\(\\)', '', formula[2])
+            f_new <- paste(lhs, '~', rhs)
+            f_out <- as.Formula(f_new)
+            return(f_out)
+        }
+        else if (!missing(d_out)) {
+
+            transformer_fun <- trimws(gsub('\\(.*', '', to_transform_raw))
+
+            transformer_args_str <- gsub('\\)', '', to_transform_plain_args)
+            transformer_args_list <- list()
+            for (i in seq_along(transformer_args_str)) {
+                args_temp <- unlist(strsplit(gsub(' ', '' ,
+                                                  transformer_args_str[i]), ','))
+                if (is_df)
+                    args_temp[1] <- sprintf('data[, "%s"]', args_temp[1])
+                else if (!isTRUE(is_df))
+                    args_temp[1] <- sprintf('data[[h]][, "%s"]', args_temp[1])
+                arg_names <- gsub('\\=.*', '', args_temp)
+                arg_names[1] <- 'x'
+                args_temp <- gsub('.*\\=', '', args_temp)
+
+                args_temp_list <- list()
+                if (is_df) {
+                    for (u in seq_along(args_temp))
+                        args_temp_list[[u]] <- eval(parse(text = args_temp[u]))
+                }
+                else if (!isTRUE(is_df)) {
+                    for (h in seq_along(data)) {
+                        temp_list <- list()
+                        for (u in seq_along(args_temp)) {
+                            temp_list[[u]] <- eval(parse(text = args_temp[u]))
+                            names(temp_list)[u] <- arg_names[u]
+                        }
+                        args_temp_list[[h]] <- temp_list
+                    }
+                }
+                if (is_df) {
+                    names(args_temp_list) <- arg_names
+                    data[, to_transform_plain[i]] <- do.call(
+                        what = transformer_fun[i],
+                        args = args_temp_list)
+                }
+                else if (!isTRUE(is_df)) {
+                    for (j in seq_along(data)) {
+                        data[[j]][, to_transform_plain[i]] <- do.call(
+                            what = transformer_fun[i],
+                            args = args_temp_list[[j]])
+                    }
+                }
+            }
+            return(data)
+        }
+    }
+    else if (length(to_transform) == 0) {
+        if (!missing(f_out)) return(formula)
+        else if (d_out) return(data)
+    }
+}
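+# Illustrative usage (editor's sketch; 'df' is a hypothetical data frame with
+# columns y and x):
+# transformer(y ~ log(x), data = df, FUN = "log", check = TRUE)   # TRUE
+# transformer(y ~ log(x), data = df, FUN = "log", f_out = TRUE)   # y ~ x
+# transformer(y ~ log(x), data = df, FUN = "log", d_out = TRUE)   # df with x logged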
+
+
+#' Remove package names from fitted model object calls.
+#'
+#' Enables \code{\link{from_zelig_model}} output to work with stargazer.
+#' @param x a fitted model object result
+#' @keywords internal
+
+strip_package_name <- function(x) {
+    if ("vglm" %in% class(x)) # maybe generalise to all s4?
+        call_temp <- gsub('^.*(?=(::))', '', x@call[1], perl = TRUE)
+    else
+        call_temp <- gsub('^.*(?=(::))', '', x$call[1], perl = TRUE)
+    call_temp <- gsub('::', '', call_temp, perl = TRUE)
+    if ("vglm" %in% class(x))
+        x@call[1] <- as.call(list(as.symbol(call_temp)))
+    else
+        x$call[1] <- as.call(list(as.symbol(call_temp)))
+    return(x)
+}
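+# Editor's note (illustrative): a stored call such as stats::lm(y ~ x, data = df)
+# has its package prefix removed, leaving lm(y ~ x, data = df), so that
+# stargazer can recognise calls in from_zelig_model() output.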
+
+#' Extract p-values from a fitted model object
+#' @param x a fitted Zelig object
+#' @keywords internal
+
+p_pull <- function(x) {
+    if ("vglm" %in% class(x)) { # maybe generalise to all s4?
+        p_values <- summary(x)@coef3[, 'Pr(>|z|)']
+    }
+    else {
+        p_values <- summary(x)$coefficients
+        if ('Pr(>|t|)' %in% colnames(p_values)) {
+            p_values <- p_values[, 'Pr(>|t|)']
+        } else {
+            p_values <- p_values[, 'Pr(>|z|)']
+        }
+    }
+
+    return(p_values)
+}
+
+#' Extract standard errors from a fitted model object
+#' @param x a fitted Zelig object
+#' @keywords internal
+
+se_pull <- function(x) {
+    if ("vglm" %in% class(x)) # maybe generalise to all s4?
+        se <- summary(x)@coef3[, "Std. Error"]
+    else
+        se <- summary(x)$coefficients[, "Std. Error"]
+    return(se)
+}
+
+#' Drop intercept columns or values from a data frame or named vector,
+#'   respectively
+#'
+#' @param x a data frame or named vector
+#' @keywords internal
+
+rm_intercept <- function(x) {
+    intercept_names <- c('(Intercept)', 'X.Intercept.', '(Intercept).*')
+    names_x <- names(x)
+    if (any(intercept_names %in% names(x))) {
+        keep <- !(names(x) %in% intercept_names)
+        if (is.data.frame(x))
+            x <- data.frame(x[, names_x[keep]])
+        else if (is.vector(x))
+            x <- x[keep]
+        names(x) <- names_x[keep]
+    }
+    return(x)
+}
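+# Illustrative usage (editor's sketch):
+# rm_intercept(c("(Intercept)" = 1.2, x = 0.5))   # named vector with only x = 0.5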
+
+
+#' Combines estimated coefficients and associated statistics
+#' from models estimated with multiply imputed data sets or bootstrapped
+#'
+#' @param obj a zelig object with an estimated model
+#' @param out_type either \code{"matrix"} or \code{"list"} specifying
+#'   whether the results should be returned as a matrix or a list.
+#' @param bagging logical whether or not to bag the bootstrapped coefficients
+#' @param messages logical whether or not to return messages for what is being
+#'   returned
+#'
+#' @return If the model uses multiply imputed or bootstrapped data then a
+#'  matrix (default) or list of combined coefficients (\code{coef}), standard
+#'  errors (\code{se}), z values (\code{zvalue}), p-values (\code{p}) is
+#'  returned. Rubin's Rules are used to combine output from multiply imputed
+#'  data. An error is returned if no imputations were included or there wasn't
+#'  bootstrapping. Please use \code{get_coef}, \code{get_se}, and
+#'  \code{get_pvalue} methods instead in cases where there are no imputations or
+#'  bootstrap.
+#'
+#' @examples
+#' set.seed(123)
+#'
+#' ## Multiple imputation example
+#' # Create fake imputed data
+#' n <- 100
+#' x1 <- runif(n)
+#' x2 <- runif(n)
+#' y <- rnorm(n)
+#' data.1 <- data.frame(y = y, x = x1)
+#' data.2 <- data.frame(y = y, x = x2)
+#'
+#' # Estimate model
+#' mi.out <- to_zelig_mi(data.1, data.2)
+#' z.out.mi <- zelig(y ~ x, model = "ls", data = mi.out)
+#'
+#' # Combine and extract coefficients and standard errors
+#' combine_coef_se(z.out.mi)
+#'
+#' ## Bootstrap example
+#' z.out.boot <- zelig(y ~ x, model = "ls", data = data.1, bootstrap = 20)
+#' combine_coef_se(z.out.boot)
+#'
+#' @author Christopher Gandrud and James Honaker
+#' @source Partially based on \code{\link{mi.meld}} from Amelia.
+#'
+#' @export
+
+combine_coef_se <- function(obj, out_type = 'matrix', bagging = FALSE,
+                            messages = TRUE)
+{
+    is_zelig(obj)
+    is_uninitializedField(obj$zelig.out)
+    if (!(out_type %in% c('matrix', 'list')))
+        stop('out_type must be either "matrix" or "list"', call. = FALSE)
+
+    if (obj$mi || obj$bootstrap) {
+        coeflist <- obj$get_coef()
+        vcovlist <- obj$get_vcov()
+        coef_names <- names(coeflist[[1]])
+
+        am.m <- length(coeflist)
+        if (obj$bootstrap & !obj$mi) am.m <- am.m - 1
+        am.k <- length(coeflist[[1]])
+        if (obj$bootstrap & !obj$mi)
+            q <- matrix(unlist(coeflist[-(am.m + 1)]), nrow = am.m,
+                        ncol = am.k, byrow = TRUE)
+        else if (obj$mi) {
+            q <- matrix(unlist(coeflist), nrow = am.m, ncol = am.k,
+                        byrow = TRUE)
+            se <- matrix(NA, nrow = am.m, ncol = am.k)
+            for(i in 1:am.m){
+                se[i, ] <- sqrt(diag(vcovlist[[i]]))
+            }
+        }
+        ones <- matrix(1, nrow = 1, ncol = am.m)
+        comb_q <- (ones %*% q)/am.m
+        if (obj$mi) ave.se2 <- (ones %*% (se^2)) / am.m
+        diff <- q - matrix(1, nrow = am.m, ncol = 1) %*% comb_q
+        sq2 <- (ones %*% (diff^2))/(am.m - 1)
+        if (obj$mi) {
+            if (messages) message('Combining imputations. . .')
+            comb_se <- sqrt(ave.se2 + sq2 * (1 + 1/am.m))
+            coef <- as.vector(comb_q)
+            se <- as.vector(comb_se)
+        }
+
+        else if (obj$bootstrap  & !obj$mi) {
+            if (messages) message('Combining bootstraps . . .')
+            comb_se <- sqrt(sq2 * (1 + 1/am.m))
+            if (bagging) {
+                coef <- as.vector(comb_q)
+            } else {
+                coef <- coeflist[[am.m + 1]]
+            }
+            se <- as.vector(comb_se)
+        }
+
+        zvalue <- coef / se
+        pr_z <- 2 * (1 - pnorm(abs(zvalue)))
+
+        if (out_type == 'matrix') {
+            out <- cbind(coef, se, zvalue, pr_z)
+            colnames(out) <- c("Estimate", "Std.Error", "z value", "Pr(>|z|)")
+            rownames(out) <- coef_names
+        }
+        else if (out_type == 'list') {
+            out <- list(coef = coef, se = se, zvalue = zvalue, p = pr_z)
+            for (i in seq(out)) names(out[[i]]) <- coef_names
+        }
+        return(out)
+    }
+    else if (!(obj$mi || obj$bootstrap)) {
+        message('No multiply imputed or bootstrapped estimates found.\nReturning untransformed list of coefficients and standard errors.')
+        out <- list(coef = coef(obj),
+                    se = get_se(obj),
+                    pvalue = get_pvalue(obj))
+
+        return(out)
+    }
+}
+
+#' Find vcov for GEE models
+#'
+#' @param obj a \code{geeglm} class object.
+
+vcov_gee <- function(obj) {
+    if (!("geeglm" %in% class(obj)))
+        stop('Not a geeglm class object', call. = FALSE)
+    out <- obj$geese$vbeta
+    return(out)
+}
+
+#' Find vcov for quantile regression models
+#'
+#' @param obj a \code{rq} class object.
+
+vcov_rq <- function(obj) {
+    if (!("rq" %in% class(obj)))
+        stop('Not an rq class object', call. = FALSE)
+    out <- summary(obj, cov = TRUE)$cov
+    return(out)
+}
+
+#' Find odds ratios for coefficients and standard errors
+#' for summary.glm class objects
+#'
+#' @param obj a \code{summary.glm} class object
+#' @param label_mod_coef character string for how to modify the coefficient
+#' label.
+#' @param label_mod_se character string for how to modify the standard error
+#' label.
+
+or_summary <- function(obj, label_mod_coef = "(OR)",
+                        label_mod_se = "(OR)"){
+    if (class(obj) != "summary.glm")
+        stop("obj must be of summary.glm class.",
+             call. = FALSE)
+
+    obj$coefficients[, 1] <- exp(obj$coefficients[, 1])
+
+    var_diag <- diag(vcov(obj))
+    obj$coefficients[, 2] <- sqrt(obj$coefficients[, 1] ^ 2 * var_diag)
+
+    colnames(obj$coefficients)[c(1, 2)] <- paste(
+        colnames(obj$coefficients)[c(1, 2)],
+        c(label_mod_coef, label_mod_se))
+    return(obj)
+}
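+# Editor's note (illustrative): coefficients are exponentiated to odds ratios and
+# standard errors are rescaled via the delta method, i.e. SE(OR) = OR * SE(beta).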
diff --git a/R/wrappers.R b/R/wrappers.R
new file mode 100755
index 0000000..69f791d
--- /dev/null
+++ b/R/wrappers.R
@@ -0,0 +1,463 @@
+#' Estimating a Statistical Model
+#'
+#' The zelig function estimates a variety of statistical
+#' models. Use \code{zelig} output with \code{setx} and \code{sim} to compute
+#' quantities of interest, such as predicted probabilities, expected values, and
+#' first differences, along with the associated measures of uncertainty
+#' (standard errors and confidence intervals).
+#'
+#' This documentation describes the \code{zelig} Zelig 4 compatibility wrapper
+#' function.
+#'
+#' @param formula a symbolic representation of the model to be
+#'   estimated, in the form \code{y \~\, x1 + x2}, where \code{y} is the
+#'   dependent variable and \code{x1} and \code{x2} are the explanatory
+#'   variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+#'   same dataset. (You may include more than two explanatory variables,
+#'   of course.) The \code{+} symbol means ``inclusion'' not
+#'   ``addition.'' You may also include interaction terms and main
+#'   effects in the form \code{x1*x2} without computing them in prior
+#'   steps; \code{I(x1*x2)} to include only the interaction term and
+#'   exclude the main effects; and quadratic terms in the form
+#'   \code{I(x1^2)}.
+#' @param model the name of a statistical model to estimate.
+#'   For a list of other supported models and their documentation see:
+#'   \url{http://docs.zeligproject.org/articles/}.
+#' @param data the name of a data frame containing the variables
+#'   referenced in the formula or a list of multiply imputed data frames
+#'   each having the same variable names and row numbers (created by
+#'   \code{Amelia} or \code{\link{to_zelig_mi}}).
+#' @param ... additional arguments passed to \code{zelig},
+#'   relevant for the model to be estimated.
+#' @param by a factor variable contained in \code{data}. If supplied,
+#'   \code{zelig} will subset
+#'   the data frame based on the levels in the \code{by} variable, and
+#'   estimate a model for each subset. This can save a considerable amount of
+#'   effort. For example, to run the same model on all fifty states, you could
+#'   use: \code{z.out <- zelig(y ~ x1 + x2, data = mydata, model = 'ls',
+#'   by = 'state')} You may also use \code{by} to run models using MatchIt
+#'   subclasses.
+#' @param cite If set to \code{TRUE} (the default), the model citation will be printed
+#'   to the console.
+#'
+#' @details
+#' Additional parameters available to many models include:
+#' \itemize{
+#'   \item weights: vector of weight values or a name of a variable in the dataset
+#'   by which to weight the model. For more information see:
+#'   \url{http://docs.zeligproject.org/articles/weights.html}.
+#'   \item bootstrap: logical or numeric. If \code{FALSE}, bootstrapping is not
+#'   used to estimate uncertainty around model parameters due to sampling error.
+#'   If an integer is supplied, it gives the number of bootstraps to run.
+#'   For more information see:
+#'   \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+#' }
+
+#'
+#' @return Depending on the class of model selected, \code{zelig} will return
+#'   an object with elements including \code{coefficients}, \code{residuals},
+#'   and \code{formula} which may be summarized using
+#'   \code{summary(z.out)} or individually extracted using, for example,
+#'   \code{coef(z.out)}. See
+#'   \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+#'   functions to extract model components. You can also extract whole fitted
+#'   model objects using \code{\link{from_zelig_model}}.
+#'
+#' @seealso \url{http://docs.zeligproject.org/articles/}
+#' @name zelig
+#' @author Matt Owen, Kosuke Imai, Olivia Lau, and Gary King
+#' @export
+
+zelig <- function(formula,
+                  model,
+                  data,
+                  ...,
+                  by = NULL,
+                  cite = TRUE) {
+    # .Deprecated('\nz$new() \nz$zelig(...)')
+    # Check if required model argument is specified
+    if (missing(model))
+        stop("Estimation model type not specified.\nSelect estimation model type with the model argument.",
+            call. = FALSE)
+
+    # Zelig Core
+    zeligmodels <- system.file(file.path("JSON", "zelig5models.json"),
+                               package = "Zelig")
+    models <- jsonlite::fromJSON(txt = readLines(zeligmodels))$zelig5models
+    # Zelig Choice
+    zeligchoicemodels <- system.file(file.path("JSON", "zelig5choicemodels.json"),
+                                     package = "ZeligChoice")
+    if (zeligchoicemodels != "")
+        models <- c(models, jsonlite::fromJSON(txt = readLines(zeligchoicemodels))$zelig5choicemodels)
+    # Zelig Panel
+    zeligpanelmodels <- system.file(file.path("JSON", "zelig5panelmodels.json"),
+                                    package = "ZeligPanel")
+    if (zeligpanelmodels != "")
+        models <- c(models, jsonlite::fromJSON(txt = readLines(zeligpanelmodels))$zelig5panelmodels)
+    # Zelig GAM
+    zeligammodels <- system.file(file.path("JSON", "zelig5gammodels.json"),
+                                 package = "ZeligGAM")
+    if (zeligammodels != "")
+        models <- c(models, jsonlite::fromJSON(txt = readLines(zeligammodels))$zelig5gammodels)
+    # Zelig Multilevel
+    zeligmixedmodels <- system.file(file.path("JSON", "zelig5mixedmodels.json"),
+        package = "ZeligMultilevel")
+    if (zeligmixedmodels != "")
+        models <- c(models, jsonlite::fromJSON(txt = readLines(zeligmixedmodels))$zelig5mixedmodels)
+    # Aggregating all available models
+    models4 <- list()
+    for (i in seq(models)) {
+        models4[[models[[i]]$wrapper]] <- names(models)[i]
+    }
+
+    model.init <- sprintf("z%s$new()", models4[[model]])
+    if (length(model.init) == 0)
+        stop(sprintf("%s is not a supported model type.", model), call. = FALSE)
+    z5 <- try(eval(parse(text = model.init)), silent = TRUE)
+    if ("try-error" %in% class(z5))
+        stop("Model '", model, "' not found")
+    ## End: Zelig 5 models
+    mf <- match.call()
+    mf$model <- NULL
+    mf$cite <- NULL
+    mf[[1]] <- quote(z5$zelig)
+    mf <- try(eval(mf, environment()), silent = TRUE)
+    if ("try-error" %in% class(mf))
+        z5$zelig(formula = formula, data = data, ..., by = by)
+    if (cite)
+        z5$cite()
+    return(z5)
+}
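+# Illustrative call (editor's sketch; 'mydata' and its columns are hypothetical):
+# z.out <- zelig(y ~ x1 + x2, model = "ls", data = mydata,
+#                weights = "w", bootstrap = 100, cite = FALSE)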
+
+#' Setting Explanatory Variable Values
+#'
+#' The \code{setx} function uses the variables identified in
+#' the \code{formula} generated by \code{zelig} and sets the values of
+#' the explanatory variables to the selected values. Use \code{setx}
+#' after \code{zelig} and before \code{sim} to simulate quantities of
+#' interest.
+#'
+#' This documentation describes the \code{setx} Zelig 4 compatibility wrapper
+#' function.
+#'
+#' @param obj output object from \code{\link{zelig}}
+#' @param fn a list of functions to apply to the data frame
+#' @param data a new data frame used to set the values of
+#'   explanatory variables. If \code{data = NULL} (the default), the
+#'   data frame called in \code{\link{zelig}} is used
+#' @param cond a logical value indicating whether unconditional
+#'   (default) or conditional (choose \code{cond = TRUE}) prediction
+#'   should be performed. If you choose \code{cond = TRUE}, \code{setx}
+#'   will coerce \code{fn = NULL} and ignore the additional arguments in
+#'   \code{\dots}. If \code{cond = TRUE} and \code{data = NULL},
+#'   \code{setx} will prompt you for a data frame.
+#' @param ... user-defined values of specific variables for overwriting the
+#'   default values set by the function \code{fn}. For example, adding
+#'   \code{var1 = mean(data\$var1)} or \code{x1 = 12} explicitly sets the value
+#'   of \code{x1} to 12. In addition, you may specify one explanatory variable
+#'   as a range of values, creating one observation for every unique value in
+#'   the range of values
+#' @return The output is returned in a field of the Zelig object. For
+#'   unconditional prediction, \code{x.out} is a model matrix based
+#'   on the specified values for the explanatory variables. For multiple
+#'   analyses (i.e., when choosing the \code{by} option in \code{\link{zelig}}),
+#'   \code{setx} returns the selected values calculated over the entire
+#'   data frame. If you wish to calculate values over just one subset of
+#'   the data frame, the 5th subset for example, you may use:
+#'   \code{x.out <- setx(z.out[[5]])}
+#'
+#' @examples
+#' # Unconditional prediction:
+#' data(turnout)
+#' z.out <- zelig(vote ~ race + educate, model = 'logit', data = turnout)
+#' x.out <- setx(z.out)
+#' s.out <- sim(z.out, x = x.out)
+#'
+#' @author Matt Owen, Olivia Lau and Kosuke Imai
+#' @seealso The full Zelig manual may be accessed online at
+#'   \url{http://docs.zeligproject.org/articles/}
+#' @keywords file
+#' @export
+
+setx <- function(obj, fn = NULL, data = NULL, cond = FALSE, ...) {
+    # .Deprecated('\nz$new() \nz$zelig(...) \nz$setx() or z$setx1 or z$setrange')
+
+    if(!is_zelig(obj, fail = FALSE))
+        obj <- to_zelig(obj)
+
+    x5 <- obj$copy()
+    # This is the length of each argument in '...'s
+    s <- list(...)
+    if (length(s) > 0) {
+        hold <- rep(1, length(s))
+        for (i in 1:length(s)) {
+            hold[i] <- length(s[i][[1]])
+        }
+    } else {
+        hold <- 1
+    }
+    if (max(hold) > 1) {
+        x5$setrange(...)
+    } else {
+        x5$setx(...)
+    }
+    return(x5)
+}
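+# Editor's sketch (z.out is a previously estimated, hypothetical zelig object):
+# a single value dispatches to $setx(), a vector of values to $setrange()
+# x.low   <- setx(z.out, Education = 5)      # one fitted observation
+# x.range <- setx(z.out, Education = 5:15)   # one observation per value in the range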
+
+#' Setting Explanatory Variable Values for First Differences
+#'
+#' This documentation describes the \code{setx1} Zelig 4 compatibility wrapper
+#' function. The wrapper is primarily useful for setting fitted values
+#' for creating first differences in piped workflows.
+#'
+#' @param obj output object from \code{\link{zelig}}
+#' @param fn a list of functions to apply to the data frame
+#' @param data a new data frame used to set the values of
+#'   explanatory variables. If \code{data = NULL} (the default), the
+#'   data frame called in \code{\link{zelig}} is used
+#' @param cond a logical value indicating whether unconditional
+#'   (default) or conditional (choose \code{cond = TRUE}) prediction
+#'   should be performed. If you choose \code{cond = TRUE}, \code{setx1}
+#'   will coerce \code{fn = NULL} and ignore the additional arguments in
+#'   \code{\dots}. If \code{cond = TRUE} and \code{data = NULL},
+#'   \code{setx1} will prompt you for a data frame.
+#' @param ... user-defined values of specific variables for overwriting the
+#'   default values set by the function \code{fn}. For example, adding
+#'   \code{var1 = mean(data\$var1)} or \code{x1 = 12} explicitly sets the value
+#'   of \code{x1} to 12. In addition, you may specify one explanatory variable
+#'   as a range of values, creating one observation for every unique value in
+#'   the range of values
+#' @return The output is returned in a field of the Zelig object. For
+#'   unconditional prediction, \code{x.out} is a model matrix based
+#'   on the specified values for the explanatory variables. For multiple
+#'   analyses (i.e., when choosing the \code{by} option in \code{\link{zelig}}),
+#'   \code{setx1} returns the selected values calculated over the entire
+#'   data frame. If you wish to calculate values over just one subset of
+#'   the data frame, the 5th subset for example, you may use:
+#'   \code{x.out <- setx(z.out[[5]])}
+#'
+#' @examples
+#' library(dplyr) # contains pipe operator %>%
+#' data(turnout)
+#'
+#' # plot first differences
+#' zelig(Fertility ~ Education, data = swiss, model = 'ls') %>%
+#'       setx(Education = 10) %>%
+#'       setx1(Education = 30) %>%
+#'       sim() %>%
+#'       plot()
+#'
+#' @author Christopher Gandrud, Matt Owen, Olivia Lau, Kosuke Imai
+#' @seealso The full Zelig manual may be accessed online at
+#'   \url{http://docs.zeligproject.org/articles/}
+#' @keywords file
+#' @export
+
+setx1 <- function(obj, fn = NULL, data = NULL, cond = FALSE, ...) {
+  is_zelig(obj)
+
+  x5 <- obj$copy()
+  # This is the length of each argument in '...'s
+  s <- list(...)
+  if (length(s) > 0) {
+    hold <- rep(1, length(s))
+    for (i in 1:length(s)) {
+      hold[i] <- length(s[i][[1]])
+    }
+  } else {
+    hold <- 1
+  }
+  if (max(hold) > 1) {
+    x5$setrange1(...)
+  } else {
+    x5$setx1(...)
+  }
+  return(x5)
+}
+
+#' Generic Method for Computing and Organizing Simulated Quantities of Interest
+#'
+#' Simulate quantities of interest from the estimated model
+#' output from \code{zelig()} given specified values of explanatory
+#' variables established in \code{setx()}. For classical \emph{maximum
+#' likelihood} models, \code{sim()} uses asymptotic normal
+#' approximation to the log-likelihood. For \emph{Bayesian models},
+#' Zelig simulates quantities of interest from the posterior density,
+#' whenever possible. For \emph{robust Bayesian models}, simulations
+#' are drawn from the identified class of Bayesian posteriors.
+#' Alternatively, you may generate quantities of interest using
+#' bootstrapped parameters.
+#'
+#' This documentation describes the \code{sim} Zelig 4 compatibility wrapper
+#' function.
+#'
+#' @param obj output object from \code{zelig}
+#' @param x values of explanatory variables used for simulation,
+#'   generated by \code{setx}. Note: if omitted, \code{sim} will look for
+#'   values in the reference class object
+#' @param x1 optional values of explanatory variables (generated by a
+#'   second call of \code{setx}) used for particular computations of
+#'   quantities of interest, such as first differences
+#' @param y a parameter reserved for the computation of particular
+#'          quantities of interest (average treatment effects). Few
+#'          models currently support this parameter
+#' @param num an integer specifying the number of simulations to compute
+#' @param bootstrap currently unsupported
+#' @param bootfn currently unsupported
+#' @param cond.data currently unsupported
+#' @param ... arguments reserved for future versions of Zelig
+#' @return The output stored in \code{s.out} varies by model. Use the
+#'  \code{names} function to view the output stored in \code{s.out}.
+#'  Common elements include:
+#'  \item{x}{the \code{\link{setx}} values for the explanatory variables,
+#'    used to calculate the quantities of interest (expected values,
+#'    predicted values, etc.). }
+#'  \item{x1}{the optional \code{\link{setx}} object used to simulate
+#'    first differences, and other model-specific quantities of
+#'    interest, such as risk-ratios.}
+#'  \item{call}{the options selected for \code{\link{sim}}, used to
+#'    replicate quantities of interest. }
+#'  \item{zelig.call}{the original function and options for
+#'    \code{\link{zelig}}, used to replicate analyses. }
+#'  \item{num}{the number of simulations requested. }
+#'  \item{par}{the parameters (coefficients, and additional
+#'    model-specific parameters). You may wish to use the same set of
+#'    simulated parameters to calculate quantities of interest rather
+#'    than simulating another set.}
+#'  \item{qi\$ev}{simulations of the expected values given the
+#'    model and \code{x}. }
+#'  \item{qi\$pr}{simulations of the predicted values given by the
+#'    fitted values. }
+#'  \item{qi\$fd}{simulations of the first differences (or risk
+#'    difference for binary models) for the given \code{x} and \code{x1}.
+#'    The difference is calculated by subtracting the expected values
+#'    given \code{x} from the expected values given \code{x1}. (If you do not
+#'    specify \code{x1}, you will not get first differences or risk
+#'    ratios.) }
+#'  \item{qi\$rr}{simulations of the risk ratios for binary and
+#'    multinomial models. See specific models for details.}
+#'  \item{qi\$ate.ev}{simulations of the average expected
+#'    treatment effect for the treatment group, using conditional
+#'    prediction. Let \eqn{t_i} be a binary explanatory variable defining
+#'    the treatment (\eqn{t_i=1}) and control (\eqn{t_i=0}) groups. Then the
+#'    average expected treatment effect for the treatment group is
+#'    \deqn{ \frac{1}{n}\sum_{i=1}^n [ \, Y_i(t_i=1) -
+#'      E[Y_i(t_i=0)] \mid t_i=1 \,],}
+#'    where \eqn{Y_i(t_i=1)} is the value of the dependent variable for
+#'    observation \eqn{i} in the treatment group. Variation in the
+#'    simulations are due to uncertainty in simulating \eqn{E[Y_i(t_i=0)]},
+#'    the counterfactual expected value of \eqn{Y_i} for observations in the
+#'    treatment group, under the assumption that everything stays the
+#'    same except that the treatment indicator is switched to \eqn{t_i=0}. }
+#'  \item{qi\$ate.pr}{simulations of the average predicted
+#'    treatment effect for the treatment group, using conditional
+#'    prediction. Let \eqn{t_i} be a binary explanatory variable defining
+#'    the treatment (\eqn{t_i=1}) and control (\eqn{t_i=0}) groups. Then the
+#'    average predicted treatment effect for the treatment group is
+#'    \deqn{ \frac{1}{n}\sum_{i=1}^n [ \, Y_i(t_i=1) -
+#'      \widehat{Y_i(t_i=0)} \mid t_i=1 \,],}
+#'    where \eqn{Y_i(t_i=1)} is the value of the dependent variable for
+#'    observation \eqn{i} in the treatment group. Variation in the
+#'    simulations are due to uncertainty in simulating
+#'    \eqn{\widehat{Y_i(t_i=0)}}, the counterfactual predicted value of
+#'    \eqn{Y_i} for observations in the treatment group, under the
+#'    assumption that everything stays the same except that the
+#'    treatment indicator is switched to \eqn{t_i=0}.}
+#'
+#' @author Christopher Gandrud, Matt Owen, Olivia Lau and Kosuke Imai
+#' @export
+
+sim <- function(obj, x, x1, y = NULL, num = 1000, bootstrap = FALSE,
+    bootfn = NULL, cond.data = NULL, ...) {
+    # .Deprecated('\nz$new() \n[...] \nz$sim(...)')
+    is_zelig(obj)
+
+    if (!missing(x)) s5 <- x$copy()
+    if (!missing(x1)) {
+        s15 <- x1$copy()
+        if (!is.null(s15$setx.out$x)) {
+            s5$setx.out$x1 <- s15$setx.out$x
+            s5$bsetx1 <- TRUE
+        }
+        if (!is.null(s15$setx.out$range)) {
+            s5$range1 <- s15$range
+            s5$setx.out$range1 <- s15$setx.out$range
+            s5$bsetrange1 <- TRUE
+        }
+    }
+    if (missing(x)) s5 <- obj$copy()
+
+    s5$sim(num = num)
+    return(s5)
+}
+
+#' Extract standard errors from a Zelig estimated model
+#'
+#' @param object an object of class Zelig
+#' @author Christopher Gandrud
+#' @export
+
+get_se <- function(object) {
+    is_zelig(object)
+    out <- object$get_se()
+    return(out)
+}
+
+#' Extract p-values from a Zelig estimated model
+#'
+#' @param object an object of class Zelig
+#' @author Christopher Gandrud
+#' @export
+
+get_pvalue <- function(object) {
+    is_zelig(object)
+    out <- object$get_pvalue()
+    return(out)
+}
+
+#' Extract quantities of interest from a Zelig simulation
+#'
+#' @param object an object of class Zelig
+#' @param qi character string with the name of quantity of interest desired:
+#'   `"ev"` for expected values, `"pv"` for predicted values or
+#'   `"fd"` for first differences.
+#' @param xvalue character string stating which of the set of values of `x`
+#'    should be used for getting the quantity of interest.
+#' @param subset subset for multiply imputed data (only relevant if multiply
+#'    imputed data is supplied in the original call.)
+#' @author Christopher Gandrud
+#' @md
+#' @export
+
+get_qi <- function(object, qi = "ev", xvalue = "x", subset = NULL) {
+    is_zelig(object)
+    out <- object$get_qi(qi = qi, xvalue = xvalue, subset = subset)
+    return(out)
+}
+
+#' Compute simulated (sample) average treatment effects on the treated from
+#' a Zelig model estimation
+#'
+#' @param object an object of class Zelig
+#' @param treatment character string naming the variable that denotes the
+#'   treatment and non-treated groups.
+#' @param treated value of `treatment` variable indicating treatment
+#' @param num number of simulations to run. Default is 1000.
+#' @examples
+#' library(dplyr)
+#' data(sanction)
+#' z.att <- zelig(num ~ target + coop + mil, model = "poisson",
+#'                  data = sanction) %>%
+#'              ATT(treatment = "mil") %>%
+#'              get_qi(qi = "ATT", xvalue = "TE")
+#'
+#' @author Christopher Gandrud
+#' @md
+#' @export
+
+ATT <- function(object, treatment, treated = 1, num = NULL) {
+    is_zelig(object)
+    object$ATT(treatment = treatment, treated = treated,
+               quietly = TRUE, num = num)
+    return(object)
+}
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..33391ec
--- /dev/null
+++ b/README.md
@@ -0,0 +1,224 @@
+<!-- README.md is generated from README.Rmd. Please edit that file -->
+[![zelig-logo](man/figures/zelig.png)](https://zeligproject.org/)
+
+<!--- Badges ----->
+**Release:** [![CRAN
+Version](https://www.r-pkg.org/badges/version/Zelig)](https://CRAN.R-project.org/package=Zelig)
+![CRAN Monthly
+Downloads](http://cranlogs.r-pkg.org/badges/last-month/Zelig) ![CRAN
+Total Downloads](http://cranlogs.r-pkg.org/badges/grand-total/Zelig)
+
+**Development:** [![Project Status: Active - The project has reached a
+stable, usable state and is being actively
+developed.](http://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/)
+[![Travis (LINUX) Build
+Status](https://travis-ci.org/IQSS/Zelig.svg?branch=master)](https://travis-ci.org/IQSS/Zelig)
+[![AppVeyor (Windows) Build
+Status](https://ci.appveyor.com/api/projects/status/github/IQSS/Zelig?branch=master&svg=true)](https://ci.appveyor.com/project/IQSS/Zelig)
+[![codecov](https://codecov.io/gh/IQSS/Zelig/branch/master/graph/badge.svg)](https://codecov.io/gh/IQSS/Zelig)
+[Dev-Blog](https://medium.com/zelig-dev)
+
+Zelig workflow overview
+-----------------------
+
+All models in Zelig can be estimated, and the results explored and presented,
+using four simple functions:
+
+1.  `zelig` to estimate the parameters,
+
+2.  `setx` to set fitted values for which we want to find quantities of
+    interest,
+
+3.  `sim` to simulate the quantities of interest,
+
+4.  `plot` to plot the simulation results.
+
+#### Zelig 5 reference classes
+
+Zelig 5 introduced [reference classes](http://adv-r.had.co.nz/R5.html).
+These enable a different way of working with Zelig that is detailed in
+[a separate
+vignette](http://docs.zeligproject.org/articles/zelig5_vs_zelig4.html).
+Directly using the reference class architecture is optional. They are
+not used in the examples below.
+
+Zelig Quickstart Guide
+----------------------
+
+Let’s walk through an example. This example uses the swiss dataset. It
+contains data on fertility and socioeconomic factors in Switzerland’s 47
+French-speaking provinces in 1888 (Mosteller and Tukey, 1977, 549-551).
+We will model the effect of education on fertility, where education is
+measured as the percent of draftees with education beyond primary school
+and fertility is measured using the common standardized fertility
+measure (see Muehlenbein (2010, 80-81) for details).
+
+Installing and Loading Zelig
+----------------------------
+
+If you haven't already done so, open your R console and install Zelig.
+We recommend installing Zelig with the zeligverse package. This installs
+core Zelig and ancillary packages at once.
+
+    install.packages('zeligverse')
+
+Alternatively you can install the development version of Zelig with:
+
+    devtools::install_github('IQSS/Zelig')
+
+Once Zelig is installed, load it:
+
+    library(zeligverse)
+
+Building Models
+---------------
+
+Let’s assume we want to estimate the effect of education on fertility.
+Since fertility is a continuous variable, least squares (`ls`) is an
+appropriate model choice. To estimate our model, we call the `zelig()`
+function with three arguments: equation, model type, and data:
+
+    # load data
+    data(swiss)
+
+    # estimate ls model
+    z5_1 <- zelig(Fertility ~ Education, model = "ls", data = swiss, cite = FALSE)
+
+    # model summary
+    summary(z5_1)
+
+    ## Model: 
+    ## 
+    ## Call:
+    ## z5$zelig(formula = Fertility ~ Education, data = swiss)
+    ## 
+    ## Residuals:
+    ##     Min      1Q  Median      3Q     Max 
+    ## -17.036  -6.711  -1.011   9.526  19.689 
+    ## 
+    ## Coefficients:
+    ##             Estimate Std. Error t value Pr(>|t|)
+    ## (Intercept)  79.6101     2.1041  37.836  < 2e-16
+    ## Education    -0.8624     0.1448  -5.954 3.66e-07
+    ## 
+    ## Residual standard error: 9.446 on 45 degrees of freedom
+    ## Multiple R-squared:  0.4406, Adjusted R-squared:  0.4282 
+    ## F-statistic: 35.45 on 1 and 45 DF,  p-value: 3.659e-07
+    ## 
+    ## Next step: Use 'setx' method
+
+The -0.86 coefficient on education suggests a negative relationship
+between the education of a province and its fertility rate. More
+precisely, for every one percent increase in draftees educated beyond
+primary school, the fertility rate of the province decreases 0.86 units.
+To help us better interpret this finding, we may want other quantities
+of interest, such as expected values or first differences. Zelig makes
+this simple by automating the translation of model estimates into
+interpretable quantities of interest using Monte Carlo simulation
+methods (see King, Tomz, and Wittenberg (2000) for more information).
+For example, let’s say we want to examine the effect of increasing the
+percent of draftees educated from 5 to 15. To do so, we set our
+predictor value using the `setx()` and `setx1()` functions:
+
+    # set education to 5 and 15
+    z5_1 <- setx(z5_1, Education = 5)
+    z5_1 <- setx1(z5_1, Education = 15)
+
+    # model summary
+    summary(z5_1)
+
+    ## setx:
+    ##   (Intercept) Education
+    ## 1           1         5
+    ## setx1:
+    ##   (Intercept) Education
+    ## 1           1        15
+    ## 
+    ## Next step: Use 'sim' method
+
+After setting our predictor value, we simulate using the `sim()` method:
+
+    # run simulations and estimate quantities of interest
+    z5_1 <- sim(z5_1)
+
+    # model summary
+    summary(z5_1)
+
+    ## 
+    ##  sim x :
+    ##  -----
+    ## ev
+    ##       mean       sd      50%     2.5%    97.5%
+    ## 1 75.30616 1.658283 75.28057 72.12486 78.48007
+    ## pv
+    ##          mean       sd      50%     2.5%   97.5%
+    ## [1,] 75.28028 9.707597 75.60282 57.11199 94.3199
+    ## 
+    ##  sim x1 :
+    ##  -----
+    ## ev
+    ##       mean       sd      50%     2.5%    97.5%
+    ## 1 66.66467 1.515977 66.63699 63.66668 69.64761
+    ## pv
+    ##          mean       sd      50%     2.5%    97.5%
+    ## [1,] 66.02916 9.441273 66.32583 47.19223 82.98039
+    ## fd
+    ##        mean       sd       50%      2.5%     97.5%
+    ## 1 -8.641488 1.442774 -8.656953 -11.43863 -5.898305
+
+At this point, we’ve estimated a model, set the predictor value, and
+estimated easily interpretable quantities of interest. The `summary()`
+method shows us our quantities of interest, namely, our expected and
+predicted values at each level of education, as well as our first
+differences: the difference in expected values at the set levels of
+education.
+
+Visualizations
+==============
+
+Zelig’s `plot()` function plots the estimated quantities of interest:
+
+    plot(z5_1)
+
+![](man/figures/example_plot_graph-1.png)
+
+We can also simulate and plot simulations from ranges of simulated
+values:
+
+    z5_2 <- zelig(Fertility ~ Education, model = "ls", data = swiss, cite = FALSE)
+
+    # set Education to range from 5 to 15 at single integer increments
+    z5_2 <- setx(z5_2, Education = 5:15)
+
+    # run simulations and estimate quantities of interest
+    z5_2 <- sim(z5_2)
+
+Then use the `plot()` function as before:
+
+    plot(z5_2)
+
+![](man/figures/example_plot_ci_plot-1.png)
+
+Getting help
+============
+
+The primary documentation for Zelig is available at:
+<http://docs.zeligproject.org/articles/>.
+
+Within R, you can access function help using the normal `?` function,
+e.g.:
+
+    ?setx
+
+If you are looking for details on particular estimation model methods,
+you can also use the `?` function. Simply place a `z` before the model
+name. For example, to access details about the `logit` model use:
+
+    ?zlogit
+
+Building Zelig (for developers)
+===============================
+
+Zelig can be fully checked and built using the code in
+[check\_build\_zelig.R](check_build_zelig.R). Note that this can be time
+consuming due to the extensive test coverage.
diff --git a/data/CigarettesSW.tab.gz b/data/CigarettesSW.tab.gz
new file mode 100644
index 0000000..4f51df6
Binary files /dev/null and b/data/CigarettesSW.tab.gz differ
diff --git a/data/MatchIt.url.tab.gz b/data/MatchIt.url.tab.gz
new file mode 100755
index 0000000..a0e3c33
Binary files /dev/null and b/data/MatchIt.url.tab.gz differ
diff --git a/data/PErisk.txt.gz b/data/PErisk.txt.gz
new file mode 100755
index 0000000..2ac49df
Binary files /dev/null and b/data/PErisk.txt.gz differ
diff --git a/data/SupremeCourt.txt.gz b/data/SupremeCourt.txt.gz
new file mode 100755
index 0000000..78e89ad
Binary files /dev/null and b/data/SupremeCourt.txt.gz differ
diff --git a/data/Weimar.txt.gz b/data/Weimar.txt.gz
new file mode 100755
index 0000000..3c65b88
Binary files /dev/null and b/data/Weimar.txt.gz differ
diff --git a/data/Zelig.url.tab.gz b/data/Zelig.url.tab.gz
new file mode 100755
index 0000000..8eda995
Binary files /dev/null and b/data/Zelig.url.tab.gz differ
diff --git a/data/approval.tab.gz b/data/approval.tab.gz
new file mode 100755
index 0000000..c9de541
Binary files /dev/null and b/data/approval.tab.gz differ
diff --git a/data/bivariate.tab.gz b/data/bivariate.tab.gz
new file mode 100755
index 0000000..c3add84
Binary files /dev/null and b/data/bivariate.tab.gz differ
diff --git a/data/coalition.tab b/data/coalition.tab
deleted file mode 100755
index 9207699..0000000
--- a/data/coalition.tab
+++ /dev/null
@@ -1,315 +0,0 @@
-"duration" "ciep12" "invest" "fract" "polar" "numst2" "crisis"
-"1" 0.5 1 1 656 11 0 24
-"2" 3 1 1 656 11 1 10
-"3" 7 1 1 656 11 1 24
-"4" 20 1 1 656 11 1 7
-"5" 6 1 1 656 11 1 7
-"6" 7 1 1 634 6 1 45
-"7" 2 1 1 599 3 1 51
-"8" 17 1 1 599 3 1 4
-"9" 27 1 1 599 3 1 6
-"10" 49 0 1 620 2 1 10
-"11" 4 1 1 592 1 0 23
-"12" 29 1 1 592 1 1 2
-"13" 49 0 1 628 5 1 29
-"14" 6 1 1 719 11 1 65
-"15" 23 1 1 719 11 1 38
-"16" 41 0 1 757 18 1 132
-"17" 10 1 1 775 24 1 73
-"18" 12 1 1 775 24 1 61
-"19" 2 1 1 762 24 0 65
-"20" 33 1 1 762 24 1 0
-"21" 1 1 1 762 24 0 0
-"22" 16 1 1 735 17 1 46
-"23" 2 1 1 753 17 1 9
-"24" 9 1 1 850 17 1 106
-"25" 3 1 1 850 17 1 7
-"26" 5 1 1 850 17 1 39
-"27" 5 1 1 850 17 1 18
-"28" 6 1 1 850 17 1 6
-"29" 45 0 1 868 15 1 87
-"30" 23 1 1 857 9 1 46
-"31" 41 1 0 648 1 0 0
-"32" 7 1 0 648 1 1 0
-"33" 49 0 0 428 0 1 0
-"34" 46 1 0 536 0 1 0
-"35" 9 1 0 650 0 0 4
-"36" 51 0 0 349 0 1 0
-"37" 10 1 0 648 0 0 0
-"38" 32 1 0 622 0 0 5
-"39" 28 1 0 614 0 0 0
-"40" 3 1 0 614 0 0 0
-"41" 53 0 0 571 0 1 1
-"42" 17 1 0 648 0 0 0
-"43" 59 0 0 583 0 1 62
-"44" 9 1 0 602 0 0 0
-"45" 52 0 0 582 0 1 0
-"46" 3 1 0 582 0 1 0
-"47" 23 1 0 777 12 0 14
-"48" 33 1 0 719 6 0 40
-"49" 1 1 0 749 5 0 36
-"50" 30 1 0 749 5 0 1
-"51" 5 1 0 740 5 0 0
-"52" 16 1 0 725 5 0 3
-"53" 27 1 0 725 5 0 3
-"54" 33 1 0 735 3 1 12
-"55" 9 1 0 735 3 1 2
-"56" 22 1 0 722 6 0 2
-"57" 25 1 0 722 6 0 0
-"58" 25 1 0 715 6 0 3
-"59" 14 1 0 748 11 0 26
-"60" 44 0 0 764 9 1 8
-"61" 12 1 0 746 10 0 14
-"62" 14 1 0 746 10 0 2
-"63" 13 1 0 855 26 0 14
-"64" 24 1 0 815 25 0 15
-"65" 18 1 0 811 26 0 0
-"66" 13 1 0 811 26 0 0
-"67" 29 1 0 791 21 0 28
-"68" 8 1 0 817 21 0 48
-"69" 16 1 0 817 24 0 7
-"70" 43 0 0 802 18 0 14
-"71" 28 1 0 791 25 1 20
-"72" 19 1 0 780 19 0 7
-"73" 10 1 0 780 19 0 16
-"74" 2 1 0 780 19 1 0
-"75" 21 1 0 791 22 1 183
-"76" 4 1 0 791 22 0 12
-"77" 5 1 0 788 22 1 148
-"78" 16 1 0 788 22 1 6
-"79" 15 1 0 788 22 1 14
-"80" 1 1 0 788 22 0 5
-"81" 2 1 0 788 22 0 0
-"82" 2 1 0 788 22 0 0
-"83" 3 1 0 795 25 1 0
-"84" 30 1 0 795 25 0 40
-"85" 8 1 0 795 25 0 16
-"86" 20 1 0 803 24 1 44
-"87" 19 1 0 803 24 1 1
-"88" 21 1 0 799 21 1 43
-"89" 24 1 0 799 21 1 21
-"90" 8 1 0 821 27 1 0
-"91" 7 1 0 821 27 1 9
-"92" 5 1 0 818 28 0 0
-"93" 33 1 0 818 28 1 47
-"94" 10 1 0 812 21 1 0
-"95" 7 1 0 812 21 0 12
-"96" 9 1 0 812 21 1 4
-"97" 13 1 0 812 21 1 13
-"98" 32 1 0 808 21 1 69
-"99" 11 1 0 808 21 1 16
-"100" 3 1 0 808 21 0 1
-"101" 47 0 0 805 22 1 25
-"102" 5 1 1 788 28 1 11
-"103" 1 1 1 788 31 0 18
-"104" 3 1 1 788 31 1 2
-"105" 5 1 1 788 31 1 5
-"106" 1 1 1 788 31 1 0
-"107" 8 1 1 788 31 1 5
-"108" 1 1 1 788 31 1 7
-"109" 0.5 1 1 788 31 1 9
-"110" 13 1 1 788 31 1 4
-"111" 3 1 1 788 31 1 23
-"112" 5 1 1 788 31 0 3
-"113" 0.5 1 1 788 31 0 8
-"114" 7 1 1 788 31 1 8
-"115" 4 1 1 788 31 1 10
-"116" 5 1 1 842 27 0 41
-"117" 1 1 1 842 27 0 13
-"118" 10 1 1 842 27 0 8
-"119" 4 1 1 842 27 0 16
-"120" 11 1 1 842 27 1 38
-"121" 7 1 1 842 27 1 7
-"122" 0.5 1 1 842 27 0 14
-"123" 11 1 1 842 27 1 4
-"124" 16 1 1 839 38 0 8
-"125" 4 1 1 839 38 0 22
-"126" 0.5 1 1 839 38 0 18
-"127" 0.5 1 1 839 38 0 10
-"128" 5 1 1 839 38 1 7
-"129" 0.5 1 1 839 38 1 28
-"130" 4 1 1 839 38 1 0
-"131" 33 1 0 722 19 1 117
-"132" 40 0 0 712 17 1 11
-"133" 34 1 0 709 14 1 77
-"134" 28 1 0 712 15 1 0
-"135" 11 1 0 712 15 0 19
-"136" 5 1 0 687 14 0 0
-"137" 43 0 0 710 17 1 1
-"138" 5 1 0 699 15 1 0
-"139" 43 0 0 699 15 1 0
-"140" 37 0 0 718 15 1 0
-"141" 8 1 0 718 15 1 62
-"142" 36 1 0 740 17 1 30
-"143" 46 0 0 704 18 1 58
-"144" 13 1 0 761 23 1 65
-"145" 2 1 0 761 23 0 3
-"146" 39 0 0 736 18 1 67
-"147" 48 0 0 754 17 1 28
-"148" 40 1 1 724 0 0 0
-"149" 35 1 1 692 0 0 0
-"150" 34 1 1 671 0 1 13
-"151" 27 1 1 636 3 1 0
-"152" 28 1 1 636 3 1 0
-"153" 42 1 1 645 0 0 0
-"154" 19 1 1 620 0 1 0
-"155" 32 1 1 620 0 1 2
-"156" 44 1 1 593 0 1 0
-"157" 52 0 1 612 0 1 0
-"158" 29 1 1 580 0 1 0
-"159" 18 1 1 580 0 1 6
-"160" 7 1 1 582 0 0 0
-"161" 8 1 1 609 0 0 19
-"162" 48 1 1 608 0 1 20
-"163" 11 1 1 802 11 1 71
-"164" 3 1 1 802 11 1 0
-"165" 12 1 1 802 11 1 4
-"166" 18 1 1 802 11 1 28
-"167" 4 1 1 802 11 1 0
-"168" 6 1 1 834 18 1 7
-"169" 16 1 1 834 18 1 0
-"170" 13 1 1 797 16 1 44
-"171" 17 1 1 814 18 1 274
-"172" 18 1 1 814 18 1 10
-"173" 10 1 1 814 18 1 8
-"174" 17 1 1 789 5 1 70
-"175" 21 1 1 789 5 1 0
-"176" 8 1 1 789 5 1 19
-"177" 8 1 1 721 5 1 31
-"178" 41 0 1 721 5 1 0
-"179" 1 1 1 702 8 1 68
-"180" 5 1 1 702 8 1 53
-"181" 26 1 1 702 8 1 0
-"182" 4 1 1 771 11 0 183
-"183" 44 0 1 771 11 1 0
-"184" 25 1 1 680 5 0 36
-"185" 9 1 1 680 5 0 35
-"186" 25 1 1 741 6 1 52
-"187" 2 1 1 652 36 1 7
-"188" 14 1 1 652 36 1 15
-"189" 3 1 1 652 36 1 1
-"190" 23 1 1 652 36 1 10
-"191" 0.5 1 1 718 36 0 17
-"192" 5 1 1 718 36 0 20
-"193" 0.5 1 1 718 36 0 13
-"194" 16 1 1 718 36 1 11
-"195" 22 1 1 718 36 1 14
-"196" 13 1 1 718 36 0 13
-"197" 7 1 1 710 32 0 12
-"198" 12 1 1 710 32 0 20
-"199" 4 1 1 710 32 0 30
-"200" 19 1 1 710 32 0 7
-"201" 15 1 1 710 32 1 19
-"202" 5 1 1 733 32 0 36
-"203" 7 1 1 733 32 1 29
-"204" 18 1 1 733 32 1 26
-"205" 27 1 1 733 32 1 33
-"206" 5 1 1 717 37 0 19
-"207" 7 1 1 717 37 1 23
-"208" 6 1 1 717 37 0 31
-"209" 3 1 1 717 37 1 48
-"210" 7 1 1 717 37 1 31
-"211" 10 1 1 717 37 1 0
-"212" 0.5 1 1 717 37 0 33
-"213" 12 1 1 719 37 1 121
-"214" 8 1 1 719 37 1 25
-"215" 7 1 1 719 37 1 12
-"216" 14 1 1 719 37 0 51
-"217" 3 1 1 719 37 0 36
-"218" 18 1 1 683 43 0 90
-"219" 11 1 1 683 43 0 55
-"220" 0.5 1 1 683 43 0 47
-"221" 8 1 1 710 38 0 126
-"222" 6 1 1 710 38 1 15
-"223" 7 1 1 710 38 1 22
-"224" 13 1 1 710 38 1 33
-"225" 3 1 1 710 38 1 16
-"226" 5 1 1 710 38 1 17
-"227" 45 1 1 751 39 1 98
-"228" 30 1 0 786 8 1 30
-"229" 15 1 0 786 8 1 50
-"230" 46 0 0 785 6 1 69
-"231" 26 1 0 754 5 1 12
-"232" 3 1 0 754 5 1 12
-"233" 48 0 0 759 3 1 63
-"234" 19 1 0 778 8 1 70
-"235" 18 1 0 778 8 1 46
-"236" 3 1 0 778 8 0 39
-"237" 49 0 0 825 16 1 49
-"238" 13 1 0 844 15 1 69
-"239" 4 1 0 844 15 0 23
-"240" 46 0 0 844 9 1 163
-"241" 41 0 0 730 3 1 270
-"242" 8 1 0 767 5 1 108
-"243" 3 1 0 767 5 0 17
-"244" 43 0 0 751 5 1 57
-"245" 47 0 0 685 7 1 0
-"246" 25 1 0 626 0 1 0
-"247" 23 1 0 626 0 1 2
-"248" 15 1 0 677 2 1 0
-"249" 33 1 0 677 2 1 7
-"250" 47 0 0 666 1 1 0
-"251" 23 1 0 689 1 0 0
-"252" 1 1 0 689 1 0 4
-"253" 25 1 0 689 1 0 4
-"254" 47 0 0 715 1 1 1
-"255" 18 1 0 680 0 1 0
-"256" 19 1 0 680 0 0 11
-"257" 12 1 0 680 0 0 11
-"258" 27 1 0 758 13 0 4
-"259" 20 1 0 758 13 0 3
-"260" 41 0 0 663 1 0 0
-"261" 8 1 0 663 1 0 5
-"262" 19 1 0 687 5 0 2
-"263" 28 1 0 687 5 1 0
-"264" 7 1 0 676 5 0 0
-"265" 3 1 0 709 16 1 0
-"266" 17 1 0 709 16 0 9
-"267" 6 1 0 709 16 1 46
-"268" 9 1 0 762 19 1 7
-"269" 2 1 0 763 16 1 0
-"270" 23 1 0 763 16 1 36
-"271" 3 1 0 763 16 1 35
-"272" 24 1 0 701 18 1 47
-"273" 3 1 0 701 18 1 9
-"274" 17 1 0 761 15 0 30
-"275" 22 1 1 644 13 0 3
-"276" 22 1 1 644 13 0 28
-"277" 43 0 1 575 5 1 35
-"278" 14 1 0 648 7 1 0
-"279" 23 1 0 648 7 1 5
-"280" 36 1 0 673 3 0 0
-"281" 12 1 0 673 3 1 1
-"282" 48 0 0 677 2 1 0
-"283" 13 1 0 686 3 1 0
-"284" 7 1 0 686 3 0 4
-"285" 28 1 0 684 2 0 0
-"286" 48 0 0 679 2 0 0
-"287" 48 0 0 692 3 0 0
-"288" 12 1 0 653 1 1 0
-"289" 11 1 0 653 1 1 18
-"290" 36 0 0 698 5 0 0
-"291" 36 0 0 702 5 0 0
-"292" 24 1 0 710 5 1 18
-"293" 12 1 0 710 5 0 8
-"294" 19 1 0 713 6 1 1
-"295" 16 1 0 713 6 0 14
-"296" 37 0 0 681 6 0 18
-"297" 5 1 0 705 5 0 19
-"298" 55 0 0 511 0 1 21
-"299" 20 1 0 518 0 1 0
-"300" 41 1 0 513 0 1 1
-"301" 2 1 0 513 0 1 1
-"302" 19 1 0 508 0 1 1
-"303" 33 1 0 508 0 1 1
-"304" 48 1 0 497 0 1 0
-"305" 12 1 0 497 0 1 5
-"306" 16 1 0 514 0 1 0
-"307" 51 0 0 506 0 1 0
-"308" 44 1 0 518 0 1 0
-"309" 7 1 0 556 3 0 4
-"310" 18 1 0 556 2 1 0
-"311" 7 1 0 556 2 1 0
-"312" 30 1 0 556 2 0 0
-"313" 50 0 0 536 1 1 0
-"314" 49 0 0 522 1 1 6
diff --git a/data/coalition.tab.gz b/data/coalition.tab.gz
new file mode 100755
index 0000000..9e8f06f
Binary files /dev/null and b/data/coalition.tab.gz differ
diff --git a/data/coalition2.txt.gz b/data/coalition2.txt.gz
new file mode 100755
index 0000000..0991618
Binary files /dev/null and b/data/coalition2.txt.gz differ
diff --git a/data/eidat.txt.gz b/data/eidat.txt.gz
new file mode 100755
index 0000000..b0c55e6
Binary files /dev/null and b/data/eidat.txt.gz differ
diff --git a/data/free1.tab.gz b/data/free1.tab.gz
new file mode 100755
index 0000000..4886e46
Binary files /dev/null and b/data/free1.tab.gz differ
diff --git a/data/free2.tab.gz b/data/free2.tab.gz
new file mode 100755
index 0000000..4886e46
Binary files /dev/null and b/data/free2.tab.gz differ
diff --git a/data/friendship.RData b/data/friendship.RData
new file mode 100755
index 0000000..054a142
Binary files /dev/null and b/data/friendship.RData differ
diff --git a/data/grunfeld.txt.gz b/data/grunfeld.txt.gz
new file mode 100755
index 0000000..334dd8b
Binary files /dev/null and b/data/grunfeld.txt.gz differ
diff --git a/data/hoff.tab.gz b/data/hoff.tab.gz
new file mode 100755
index 0000000..36e71b9
Binary files /dev/null and b/data/hoff.tab.gz differ
diff --git a/data/homerun.txt.gz b/data/homerun.txt.gz
new file mode 100755
index 0000000..1ba3ee5
Binary files /dev/null and b/data/homerun.txt.gz differ
diff --git a/data/immi1.tab.gz b/data/immi1.tab.gz
new file mode 100755
index 0000000..3fe1f05
Binary files /dev/null and b/data/immi1.tab.gz differ
diff --git a/data/immi2.tab.gz b/data/immi2.tab.gz
new file mode 100755
index 0000000..259d69a
Binary files /dev/null and b/data/immi2.tab.gz differ
diff --git a/data/immi3.tab.gz b/data/immi3.tab.gz
new file mode 100755
index 0000000..da4b8c5
Binary files /dev/null and b/data/immi3.tab.gz differ
diff --git a/data/immi4.tab.gz b/data/immi4.tab.gz
new file mode 100755
index 0000000..3d786a5
Binary files /dev/null and b/data/immi4.tab.gz differ
diff --git a/data/immi5.tab.gz b/data/immi5.tab.gz
new file mode 100755
index 0000000..4abd1da
Binary files /dev/null and b/data/immi5.tab.gz differ
diff --git a/data/immigration.tab.gz b/data/immigration.tab.gz
new file mode 100755
index 0000000..c016da4
Binary files /dev/null and b/data/immigration.tab.gz differ
diff --git a/data/klein.txt.gz b/data/klein.txt.gz
new file mode 100755
index 0000000..d1a80a4
Binary files /dev/null and b/data/klein.txt.gz differ
diff --git a/data/kmenta.txt.gz b/data/kmenta.txt.gz
new file mode 100755
index 0000000..f38a759
Binary files /dev/null and b/data/kmenta.txt.gz differ
diff --git a/data/macro.tab.gz b/data/macro.tab.gz
new file mode 100755
index 0000000..0931186
Binary files /dev/null and b/data/macro.tab.gz differ
diff --git a/data/mexico.tab.gz b/data/mexico.tab.gz
new file mode 100755
index 0000000..d725306
Binary files /dev/null and b/data/mexico.tab.gz differ
diff --git a/data/mid.tab.gz b/data/mid.tab.gz
new file mode 100755
index 0000000..3f1b10f
Binary files /dev/null and b/data/mid.tab.gz differ
diff --git a/data/newpainters.txt.gz b/data/newpainters.txt.gz
new file mode 100755
index 0000000..3289a7b
Binary files /dev/null and b/data/newpainters.txt.gz differ
diff --git a/data/sanction.tab b/data/sanction.tab
deleted file mode 100755
index 5157911..0000000
--- a/data/sanction.tab
+++ /dev/null
@@ -1,79 +0,0 @@
-"mil" "coop" "target" "import" "export" "cost" "num" "ncost"
-"1" 1 4 3 1 1 4  15 "major loss"
-"2" 0 2 3 0 1 3   4 "modest loss"
-"3" 0 1 3 1 0 2   1 "little effect"
-"4" 1 1 3 1 1 2   1 "little effect"
-"5" 0 1 3 1 1 2   1 "little effect"
-"6" 0 1 3 0 1 2   1 "little effect"
-"7" 1 2 2 0 1 2   3 "little effect"
-"8" 0 1 3 0 0 2   3 "little effect"
-"9" 0 2 1 0 0 1   2 "net gain"
-"10" 1 2 3 1 1 2   1 "little effect"
-"11" 1 1 2 0 0 1   1 "net gain"
-"12" 0 1 2 1 1 2   1 "little effect"
-"13" 0 3 1 1 1 2   8 "little effect"
-"14" 0 3 3 1 1 4   7 "major loss"
-"15" 0 3 2 1 1 3  21 "modest loss"
-"16" 0 1 2 0 0 1   1 "net gain"
-"17" 0 4 2 1 1 2   7 "little effect"
-"18" 0 3 3 0 0 2   4 "little effect"
-"19" 0 1 1 0 0 1   1 "net gain"
-"20" 0 3 3 1 0 3 120 "modest loss"
-"21" 0 4 3 0 0 2   7 "little effect"
-"22" 0 1 2 0 0 1   1 "net gain"
-"23" 0 1 2 1 1 4   1 "major loss"
-"24" 0 1 2 0 0 1   1 "net gain"
-"25" 0 1 1 0 0 1   1 "net gain"
-"26" 0 3 2 1 1 2  32 "little effect"
-"27" 0 1 2 1 0 2   1 "little effect"
-"28" 0 1 2 1 0 2   1 "little effect"
-"29" 0 1 2 0 0 1   1 "net gain"
-"30" 0 4 2 1 1 3 150 "modest loss"
-"31" 0 1 2 0 0 1   1 "net gain"
-"32" 0 1 2 0 0 1   1 "net gain"
-"33" 0 1 1 0 0 1   1 "net gain"
-"34" 0 1 2 0 1 1   5 "net gain"
-"35" 0 2 1 1 1 2   2 "little effect"
-"36" 0 3 3 0 1 1  10 "net gain"
-"37" 0 1 2 0 0 1   1 "net gain"
-"38" 0 1 1 0 0 1   1 "net gain"
-"39" 0 1 2 0 0 1   1 "net gain"
-"40" 0 2 3 0 1 2   2 "little effect"
-"41" 0 2 2 0 1 2   1 "little effect"
-"42" 0 2 3 0 0 2   2 "little effect"
-"43" 0 1 3 1 0 2   1 "little effect"
-"44" 0 2 3 0 1 2   1 "little effect"
-"45" 0 1 1 1 1 1   1 "net gain"
-"46" 0 1 2 0 1 1   1 "net gain"
-"47" 0 1 3 0 1 2   1 "little effect"
-"48" 0 2 1 1 0 1   1 "net gain"
-"49" 0 1 3 0 0 1   1 "net gain"
-"50" 0 1 2 0 0 1   1 "net gain"
-"51" 0 1 2 0 1 2   1 "little effect"
-"52" 0 1 3 0 1 2   1 "little effect"
-"53" 0 1 1 0 1 1   1 "net gain"
-"54" 0 1 1 0 0 1   2 "net gain"
-"55" 0 1 2 0 0 1   1 "net gain"
-"56" 0 1 2 0 1 2   1 "little effect"
-"57" 0 2 2 0 1 2   3 "little effect"
-"58" 0 2 3 0 1 2   2 "little effect"
-"59" 0 2 3 0 1 2   2 "little effect"
-"60" 0 3 2 1 1 3   9 "modest loss"
-"61" 1 3 2 0 0 1   7 "net gain"
-"62" 0 1 3 1 1 3   1 "modest loss"
-"63" 0 3 1 1 1 3  10 "modest loss"
-"64" 0 2 2 0 0 1   2 "net gain"
-"65" 0 3 3 1 1 2   8 "little effect"
-"66" 0 2 1 0 0 1   2 "net gain"
-"67" 0 3 3 0 1 3  13 "modest loss"
-"68" 0 1 2 0 1 2   1 "little effect"
-"69" 0 1 2 1 0 2   1 "little effect"
-"70" 0 3 1 1 1 2   4 "little effect"
-"71" 0 2 3 0 1 3   1 "modest loss"
-"72" 0 2 2 0 0 1   8 "net gain"
-"73" 1 3 1 1 1 2  14 "little effect"
-"74" 0 2 1 0 0 1   2 "net gain"
-"75" 0 1 3 0 1 2   1 "little effect"
-"76" 0 4 3 1 0 2  13 "little effect"
-"77" 0 1 2 0 0 1   1 "net gain"
-"78" 1 3 1 1 1 2  10 "little effect"
diff --git a/data/sanction.tab.gz b/data/sanction.tab.gz
new file mode 100755
index 0000000..263e679
Binary files /dev/null and b/data/sanction.tab.gz differ
diff --git a/data/seatshare.rda b/data/seatshare.rda
new file mode 100644
index 0000000..ef62ebd
Binary files /dev/null and b/data/seatshare.rda differ
diff --git a/data/sna.ex.RData b/data/sna.ex.RData
new file mode 100755
index 0000000..b80635c
Binary files /dev/null and b/data/sna.ex.RData differ
diff --git a/data/swiss.txt.gz b/data/swiss.txt.gz
new file mode 100755
index 0000000..6c8f9ff
Binary files /dev/null and b/data/swiss.txt.gz differ
diff --git a/data/tobin.txt.gz b/data/tobin.txt.gz
new file mode 100755
index 0000000..cc932bc
Binary files /dev/null and b/data/tobin.txt.gz differ
diff --git a/data/turnout.tab.gz b/data/turnout.tab.gz
new file mode 100755
index 0000000..af3cefe
Binary files /dev/null and b/data/turnout.tab.gz differ
diff --git a/data/voteincome.txt.gz b/data/voteincome.txt.gz
new file mode 100755
index 0000000..7aceff1
Binary files /dev/null and b/data/voteincome.txt.gz differ
diff --git a/debian/changelog b/debian/changelog
index cd5607a..0051988 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+r-cran-zeligchoice (5.1.5+git20201212.1.f16809b-1) UNRELEASED; urgency=low
+
+  * New upstream snapshot.
+
+ -- Debian Janitor <janitor@jelmer.uk>  Sat, 22 Oct 2022 19:41:43 -0000
+
 r-cran-zeligchoice (0.9-6-3) unstable; urgency=medium
 
   * Standards-Version: 4.5.0 (routine-update)
diff --git a/demo/00Index b/demo/00Index
deleted file mode 100644
index e3036a7..0000000
--- a/demo/00Index
+++ /dev/null
@@ -1,5 +0,0 @@
-demo-blogit			example of bivariate logit model
-demo-bprobit		example of bivariate probit model
-demo-mlogit   		example of multinomial logit regression model
-demo-ologit			example of ordered logit regression model
-demo-oprobit  		example of ordered probit regression model
diff --git a/demo/demo-blogit.R b/demo/demo-blogit.R
deleted file mode 100755
index e397d9d..0000000
--- a/demo/demo-blogit.R
+++ /dev/null
@@ -1,24 +0,0 @@
-# Zelig 4 code:
-# library(Zelig4)
-library(ZeligChoice)
-data(sanction)
-z.out1 <- zelig(cbind(import, export) ~ coop + cost + target,
-                model = "blogit", data = sanction)
-summary(z.out1)
-x.low <- setx(z.out1, cost = 1)
-set.seed(42)
-s.out1 <- sim(z.out1, x.low, num=100)
-summary(s.out1)
-
-# Zelig 5 code:
-data(sanction)
-z5 <- zblogit$new()
-z5$zelig(cbind(import, export) ~ coop + cost + target, data = sanction)
-z.out2 <- zelig(cbind(import, export) ~ coop + cost + target, model = "blogit", data = sanction)
-
-z5
-z5$setx(cost = 1)
-z5
-set.seed(42)
-z5$sim(num = 100)
-z5$cite()
diff --git a/demo/demo-bprobit.R b/demo/demo-bprobit.R
deleted file mode 100644
index c76f294..0000000
--- a/demo/demo-bprobit.R
+++ /dev/null
@@ -1,26 +0,0 @@
-library(VGAM)
-
-# Zelig 4 code:
-library(Zelig)
-library(ZeligChoice)
-data(sanction)
-z.out1 <- zelig(cbind(import, export) ~ coop + cost + target,
-                model = "bprobit", data = sanction)
-summary(z.out1)
-x.low <- setx(z.out1, cost = 1)
-set.seed(42)
-s.out1 <- sim(z.out1, x = x.low)
-summary(s.out1)
-
-# Zelig 5 code:
-data(sanction)
-z5 <- zbprobit$new()
-z5$zelig(cbind(import, export) ~ coop + cost + target, data = sanction)
-z5
-z5$setx(cost = 1)
-set.seed(42)
-z5$sim(num = 1000)
-z5$summarize()
-z5$cite()
-
-# z5$zelig(list(import ~ coop + cost + target, export ~ coop + cost + target), data = sanction)
diff --git a/demo/demo-mlogit.R b/demo/demo-mlogit.R
deleted file mode 100644
index a652efe..0000000
--- a/demo/demo-mlogit.R
+++ /dev/null
@@ -1,35 +0,0 @@
-library(VGAM)
-
-# Zelig 4 code:
-library(Zelig4)
-library(ZeligChoice4)
-data(mexico)
-
-z.out1 <- Zelig4::zelig(as.factor(vote88) ~ pristr + othcok + othsocok,
-                        model = "mlogit", data = mexico)
-summary(z.out1)
-x.weak <- Zelig4::setx(z.out1, pristr = 1)
-x.strong <- Zelig4::setx(z.out1, pristr = 3)
-x.out <- Zelig4::setx(z.out1)
-set.seed(42)
-s.out1 <- Zelig4::sim(z.out1, x = x.out)
-summary(s.out1)
-
-v <- VGAM::vglm(formula = as.factor(vote88) ~ pristr + othcok + 
-                othsocok, data = mexico, family = "multinomial")
-
-# Zelig 5 code:
-data(mexico)
-z5 <- zmlogit$new()
-z5
-z5$zelig(as.factor(vote88) ~ pristr + othcok + othsocok, data = mexico)
-z5
-z5$setx()
-set.seed(42)
-z5$sim(num = 1000)
-z5$sim.out
-z5$summarize()
-z5$cite()
-
-# z5$zelig(list(import ~ coop + cost + target, export ~ coop + cost + target), data = sanction)
-
diff --git a/demo/demo-ologit.R b/demo/demo-ologit.R
deleted file mode 100644
index 98479eb..0000000
--- a/demo/demo-ologit.R
+++ /dev/null
@@ -1,56 +0,0 @@
-library(VGAM)
-
-# Zelig 4 code:
-library(Zelig4)
-library(ZeligChoice4)
-data(sanction)
-sanction$ncost <- factor(sanction$ncost, ordered = TRUE,
-                         levels = c("net gain", "little effect",
-                                    "modest loss", "major loss"))
-z.out <- Zelig4::zelig(ncost ~ mil + coop, model = "ologit", data = sanction)
-summary(z.out)
-x.out <- Zelig4::setx(z.out, fn = NULL)
-set.seed(42)
-s.out <- Zelig4::sim(z.out, x = x.out, num = 100)
-summary(s.out)
-
-# Zelig 5 code:
-data(sanction)
-z5 <- zologit$new()
-z5
-z5$zelig(ncost ~ mil + coop, data = sanction)
-z5
-z5$setx(coop = 1:3)
-
-z5$setx()
-
-z.out <- z5$zelig.out$z.out[[1]]
-
-set.seed(42)
-z5$sim(num = 100)
-z5$sim.out
-z5$summarize()
-z5$cite()
-
-z5 <- zologit$new()
-z5
-z5$zelig(ncost ~ mil + coop, data = sanction, by = "export")
-z5
-z5$setx()
-
-z.out <- z5$zelig.out$z.out[[1]]
-
-set.seed(42)
-z5$sim(num = 100)
-z5$sim.out
-z5$summarize()
-z5$cite()
-
-
-fit <- MASS::polr(formula = as.factor(ncost) ~ mil + coop, data = sanction, method = "logistic", Hess = TRUE)
-summary(fit)
-
-fit2 <- MASS::polr(formula = ncost ~ mil + coop, data = sanction, method = "logistic", Hess = TRUE)
-summary(fit2)
-# z5$zelig(list(import ~ coop + cost + target, export ~ coop + cost + target), data = sanction)
-
diff --git a/demo/demo-oprobit.R b/demo/demo-oprobit.R
deleted file mode 100644
index ba2c139..0000000
--- a/demo/demo-oprobit.R
+++ /dev/null
@@ -1,35 +0,0 @@
-library(VGAM)
-
-## Results don't match: Zelig 4 seems to be using the logit inverse link in the probit model
-
-# Zelig 4 code:
-library(Zelig4)
-library(ZeligChoice4)
-data(sanction)
-sanction$ncost <- factor(sanction$ncost, ordered = TRUE,
-                         levels = c("net gain", "little effect",
-                                    "modest loss", "major loss"))
-z.out <- Zelig4::zelig(ncost ~ mil + coop, model = "oprobit", data = sanction)
-summary(z.out)
-x.out <- Zelig4::setx(z.out, fn = NULL)
-set.seed(42)
-s.out <- Zelig4::sim(z.out, x = x.out, num = 5)
-summary(s.out)
-
-# Zelig 5 code:
-data(sanction)
-z5 <- zoprobit$new()
-z5
-z5$zelig(ncost ~ mil + coop, data = sanction)
-z5
-z5$setrange(sanction = 1)
-z5
-z5$sim(num = 100)
-z5
-z5$setx()
-
-set.seed(42)
-z5$sim(num = 5)
-z5$sim.out
-z5$summarize()
-z5$cite()
diff --git a/inst/CITATION b/inst/CITATION
new file mode 100644
index 0000000..e109dc9
--- /dev/null
+++ b/inst/CITATION
@@ -0,0 +1,35 @@
+citHeader("To cite Zelig in publications please use:")
+
+if(!exists("meta") || is.null(meta)) meta <- packageDescription("Zelig")
+year <- sub(".*(2[[:digit:]]{3})-.*", "\\1", meta$Date)
+vers <- paste("Version", meta$Version)  
+
+bibentry(
+            bibtype="Manual",
+            title = "Zelig: Everyone's Statistical Software",
+            author = c(
+            	person("Christine", "Choirat", email="cchoirat@iq.harvard.edu", role = "aut"),
+            	person("James", "Honaker", email="jhonaker@iq.harvard.edu", role = "aut"),
+            	person("Kosuke", "Imai", role = "aut"),
+                person("Gary", "King", role = "aut"),
+                person("Olivia", "Lau", role = "aut")
+                ),
+            year = year,
+            note = vers,
+            url = "https://zeligproject.org/")
+
+
+bibentry(
+            bibtype="Article",
+            title = "Toward A Common Framework for Statistical Analysis and Development",
+            author = c(
+            	person("Kosuke", "Imai"),
+                person("Gary", "King"),
+                person("Olivia", "Lau")
+                ),
+            journal = "Journal of Computational and Graphical Statistics",
+            volume = 17,
+            number = 4,
+            year = 2008,
+            pages = "892-913",
+            url =  "https://gking.harvard.edu/files/abs/z-abs.shtml")
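The two bibentry records added above are what R's standard citation tooling surfaces once the package is installed. A minimal sketch, assuming the package is installed under the name Zelig:

# Print the manual and article entries defined in inst/CITATION
citation("Zelig")

# The same records rendered as BibTeX, via the standard utils helper
toBibtex(citation("Zelig"))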
diff --git a/inst/JSON/zelig5choicemodels.json b/inst/JSON/zelig5choicemodels.json
deleted file mode 100644
index 6e4aa68..0000000
--- a/inst/JSON/zelig5choicemodels.json
+++ /dev/null
@@ -1,69 +0,0 @@
-{
-  "zelig5choicemodels": {
-    "blogit": {
-      "name": ["blogit"],
-      "description": ["Bivariate Logit Regression for Dichotomous Dependent Variables"],
-      "outcome": {
-        "modelingType": [""]
-      },
-      "explanatory": {
-        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
-      },
-      "vignette.url": ["http://docs.zeligproject.org/articles/zeligchoice_blogit.html"],
-      "wrapper": ["blogit"],
-      "tree": ["Zelig-blogit", "Zelig-bbinchoice"]
-    },
-    "bprobit": {
-      "name": ["bprobit"],
-      "description": ["Bivariate Probit Regression for Dichotomous Dependent Variables"],
-      "outcome": {
-        "modelingType": [""]
-      },
-      "explanatory": {
-        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
-      },
-      "vignette.url": ["http://docs.zeligproject.org/articles/zeligchoice_bprobit.html"],
-      "wrapper": ["bprobit"],
-      "tree": ["Zelig-bprobit", "Zelig-bbinchoice"]
-    },
-    "mlogit": {
-      "name": ["mlogit"],
-      "description": ["Multinomial Logistic Regression for Dependent Variables with Unordered Categorical Values"],
-      "outcome": {
-        "modelingType": [""]
-      },
-      "explanatory": {
-        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
-      },
-      "vignette.url": ["http://docs.zeligproject.org/articles/zeligchoice_mlogit.html"],
-      "wrapper": ["mlogit"],
-      "tree": ["Zelig-mlogit"]
-    },
-    "ologit": {
-      "name": ["ologit"],
-      "description": ["Ordinal Logit Regression for Ordered Categorical Dependent Variables"],
-      "outcome": {
-        "modelingType": [""]
-      },
-      "explanatory": {
-        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
-      },
-      "vignette.url": ["http://docs.zeligproject.org/articles/zeligchoice_ologit.html"],
-      "wrapper": ["ologit"],
-      "tree": ["Zelig-ologit", "Zelig-obinchoice"]
-    },
-    "oprobit": {
-      "name": ["oprobit"],
-      "description": ["Ordinal Probit Regression for Ordered Categorical Dependent Variables"],
-      "outcome": {
-        "modelingType": [""]
-      },
-      "explanatory": {
-        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
-      },
-      "vignette.url": ["http://docs.zeligproject.org/articles/zeligchoice_oprobit.html"],
-      "wrapper": ["oprobit"],
-      "tree": ["Zelig-oprobit", "Zelig-obinchoice"]
-    }
-  }
-}
diff --git a/inst/JSON/zelig5models.json b/inst/JSON/zelig5models.json
new file mode 100644
index 0000000..290f4bd
--- /dev/null
+++ b/inst/JSON/zelig5models.json
@@ -0,0 +1,460 @@
+{
+  "zelig5models": {
+    "ls": {
+      "name": ["ls"],
+      "description": ["Least Squares Regression for Continuous Dependent Variables"],
+      "outcome": {
+        "modelingType": ["continous"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_ls.html"],
+      "wrapper": ["ls"],
+      "tree": ["Zelig-ls"]
+    },
+    "ivreg": {
+      "name": ["ivreg"],
+      "description": ["Instrumental-Variable Regression"],
+      "outcome": {
+        "modelingType": ["continous"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_ivreg.html"],
+      "wrapper": ["ivreg"],
+      "tree": ["Zelig-ivreg"]
+    },
+    "logit": {
+      "name": ["logit"],
+      "description": ["Logistic Regression for Dichotomous Dependent Variables"],
+      "outcome": {
+        "modelingType": ["binary"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_logit.html"],
+      "wrapper": ["logit"],
+      "tree": ["Zelig-logit", "Zelig-binchoice", "Zelig-glm"]
+    },
+    "probit": {
+      "name": ["probit"],
+      "description": ["Probit Regression for Dichotomous Dependent Variables"],
+      "outcome": {
+        "modelingType": ["binary"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_probit.html"],
+      "wrapper": ["probit"],
+      "tree": ["Zelig-probit", "Zelig-binchoice", "Zelig-glm"]
+    },
+    "poisson": {
+      "name": ["poisson"],
+      "description": ["Poisson Regression for Event Count Dependent Variables"],
+      "outcome": {
+        "modelingType": ["discrete"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_poisson.html"],
+      "wrapper": ["poisson"],
+      "tree": ["Zelig-poisson", "Zelig-glm"]
+    },
+    "normal": {
+      "name": ["normal"],
+      "description": ["Normal Regression for Continuous Dependent Variables"],
+      "outcome": {
+        "modelingType": ["continuous"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_normal.html"],
+      "wrapper": ["normal"],
+      "tree": ["Zelig-normal", "Zelig-glm"]
+    },
+    "gamma": {
+      "name": ["gamma"],
+      "description": ["Gamma Regression for Continuous, Positive Dependent Variables"],
+      "outcome": {
+        "modelingType": ["continous"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_gamma.html"],
+      "wrapper": ["gamma"],
+      "tree": ["Zelig-gamma", "Zelig-glm"]
+    },
+    "negbin": {
+      "name": ["negbin"],
+      "description": ["Negative Binomial Regression for Event Count Dependent Variables"],
+      "outcome": {
+        "modelingType": ["discrete"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_negbin.html"],
+      "wrapper": ["negbin"],
+      "tree": ["Zelig-negbin"]
+    },
+    "exp": {
+      "name": ["exp"],
+      "description": ["Exponential Regression for Duration Dependent Variables"],
+      "outcome": {
+        "modelingType": ["continous"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_exp.html"],
+      "wrapper": ["exp"],
+      "tree": ["Zelig-exp"]
+    },
+    "lognorm": {
+      "name": ["lognorm"],
+      "description": ["Log-Normal Regression for Duration Dependent Variables"],
+      "outcome": {
+        "modelingType": ["discrete"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_lognorm.html"],
+      "wrapper": ["lognorm"],
+      "tree": ["Zelig-lognorm"]
+    },
+    "tobit": {
+      "name": ["tobit"],
+      "description": ["Linear regression for Left-Censored Dependent Variable"],
+      "outcome": {
+        "modelingType": ["continous"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_tobit.html"],
+      "wrapper": ["tobit"],
+      "tree": ["Zelig-tobit"]
+    },
+    "quantile": {
+      "name": ["quantile"],
+      "description": ["Quantile Regression for Continuous Dependent Variables"],
+      "outcome": {
+        "modelingType": ["continuous"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_quantile.html"],
+      "wrapper": ["rq"],
+      "tree": ["Zelig-quantile"]
+    },
+    "relogit": {
+      "name": ["relogit"],
+      "description": ["Rare Events Logistic Regression for Dichotomous Dependent Variables"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_relogit.html"],
+      "wrapper": ["relogit"],
+      "tree": ["Zelig-relogit"]
+    },
+    "logitgee": {
+      "name": ["logit-gee"],
+      "description": ["General Estimating Equation for Logistic Regression"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_logitgee.html"],
+      "wrapper": ["logit.gee"],
+      "tree": ["Zelig-logit-gee", "Zelig-binchoice-gee", "Zelig-gee", "Zelig-binchoice"]
+    },
+    "probitgee": {
+      "name": ["probit-gee"],
+      "description": ["General Estimating Equation for Probit Regression"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_probitgee.html"],
+      "wrapper": ["probit.gee"],
+      "tree": ["Zelig-probit-gee", "Zelig-binchoice-gee", "Zelig-gee", "Zelig-binchoice"]
+    },
+    "gammagee": {
+      "name": ["gamma-gee"],
+      "description": ["General Estimating Equation for Gamma Regression"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_gammagee.html"],
+      "wrapper": ["gamma.gee"],
+      "tree": ["Zelig-gamma-gee", "Zelig-gee", "Zelig-gamma"]
+    },
+    "normalgee": {
+      "name": ["normal-gee"],
+      "description": ["General Estimating Equation for Normal Regression"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_normalgee.html"],
+      "wrapper": ["normal.gee"],
+      "tree": ["Zelig-normal-gee", "Zelig-gee", "Zelig-normal"]
+    },
+    "poissongee": {
+      "name": ["poisson-gee"],
+      "description": ["General Estimating Equation for Poisson Regression"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_poissongee.html"],
+      "wrapper": ["poisson.gee"],
+      "tree": ["Zelig-poisson-gee", "Zelig-gee", "Zelig-poisson"]
+    },
+    "factorbayes": {
+      "name": ["factor-bayes"],
+      "description": ["Bayesian Factor Analysis"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_factorbayes.html"],
+      "wrapper": ["factor.bayes"],
+      "tree": ["Zelig-factor-bayes"]
+    },
+    "logitbayes": {
+      "name": ["logit-bayes"],
+      "description": ["Bayesian Logistic Regression for Dichotomous Dependent Variables"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_logitbayes.html"],
+      "wrapper": ["logit.bayes"],
+      "tree": ["Zelig-logit-bayes", "Zelig-bayes", "Zelig-logit"]
+    },
+    "mlogitbayes": {
+      "name": ["mlogit-bayes"],
+      "description": ["Bayesian Multinomial Logistic Regression for Dependent Variables with Unordered Categorical Values"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_mlogitbayes.html"],
+      "wrapper": ["mlogit.bayes"],
+      "tree": ["Zelig-mlogit-bayes", "Zelig-bayes"]
+    },
+    "normalbayes": {
+      "name": ["normal-bayes"],
+      "description": ["Bayesian Normal Linear Regression"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_normalbayes.html"],
+      "wrapper": ["normal.bayes"],
+      "tree": ["Zelig-normal-bayes", "Zelig-bayes", "Zelig-normal"]
+    },
+    "oprobitbayes": {
+      "name": ["oprobit-bayes"],
+      "description": ["Bayesian Probit Regression for Dichotomous Dependent Variables"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_oprobitbayes.html"],
+      "wrapper": ["oprobit.bayes"],
+      "tree": ["Zelig-oprobit-bayes", "Zelig-bayes"]
+    },
+    "poissonbayes": {
+      "name": ["poisson-bayes"],
+      "description": ["Bayesian Poisson Regression"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_poissonbayes.html"],
+      "wrapper": ["poisson.bayes"],
+      "tree": ["Zelig-poisson-bayes", "Zelig-bayes", "Zelig-poisson"]
+    },
+    "probitbayes": {
+      "name": ["probit-bayes"],
+      "description": ["Bayesian Probit Regression for Dichotomous Dependent Variables"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_probitbayes.html"],
+      "wrapper": ["probit.bayes"],
+      "tree": ["Zelig-probit-bayes", "Zelig-bayes", "Zelig-probit"]
+    },
+    "tobitbayes": {
+      "name": ["tobit-bayes"],
+      "description": ["Bayesian Tobit Regression for a Censored Dependent Variable"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_tobitbayes.html"],
+      "wrapper": ["tobit.bayes"],
+      "tree": ["Zelig-tobit-bayes", "Zelig-bayes", "Zelig-tobit"]
+    },
+    "weibull": {
+      "name": ["weibull"],
+      "description": ["Weibull Regression for Duration Dependent Variables"],
+      "outcome": {
+        "modelingType": ["bounded"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_weibull.html"],
+      "wrapper": ["weibull"],
+      "tree": ["Zelig-weibull"]
+    },
+    "logitsurvey": {
+      "name": ["logit-survey"],
+      "description": ["Logistic Regression with Survey Weights"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_logit-survey.html"],
+      "wrapper": ["logit.survey"],
+      "tree": ["Zelig-logit-survey", "Zelig-binchoice-survey", "Zelig-survey", "Zelig-binchoice"]
+    },
+    "probitsurvey": {
+      "name": ["probit-survey"],
+      "description": ["Probit Regression with Survey Weights"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_probit-survey.html"],
+      "wrapper": ["probit.survey"],
+      "tree": ["Zelig-probit-survey", "Zelig-binchoice-survey", "Zelig-survey", "Zelig-binchoice"]
+    },
+    "normalsurvey": {
+      "name": ["normal-survey"],
+      "description": ["Normal Regression for Continuous Dependent Variables with Survey Weights"],
+      "outcome": {
+        "modelingType": ["continuous"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_normal-survey.html"],
+      "wrapper": ["normal.survey"],
+      "tree": ["Zelig-normal-survey", "Zelig-survey"]
+    },
+    "gammasurvey": {
+      "name": ["gamma-survey"],
+      "description": ["Gamma Regression with Survey Weights"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_gamma-survey.html"],
+      "wrapper": ["gamma.survey"],
+      "tree": ["Zelig-gamma-survey", "Zelig-survey", "Zelig-gamma"]
+    },
+    "poissonsurvey": {
+      "name": ["poisson-survey"],
+      "description": ["Poisson Regression with Survey Weights"],
+      "outcome": {
+        "modelingType": [""]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_poisson-survey.html"],
+      "wrapper": ["poisson.survey"],
+      "tree": ["Zelig-poisson-survey", "Zelig-survey", "Zelig-poisson"]
+    },
+    "arima": {
+      "name": ["arima"],
+      "description": ["Autoregressive Moving-Average Models for Time-Series Data"],
+      "outcome": {
+        "modelingType": ["continuous"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_arima.html"],
+      "wrapper": ["arima"],
+      "tree": ["Zelig-arima", "Zelig-timeseries"]
+    },
+    "ma": {
+      "name": ["ma"],
+      "description": ["Time-Series Model with Moving Average"],
+      "outcome": {
+        "modelingType": ["continuous"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_ma.html"],
+      "wrapper": ["ma"],
+      "tree": ["Zelig-ma", "Zelig-timeseries"]
+    },
+    "ar": {
+      "name": ["ar"],
+      "description": ["Time-Series Model with Autoregressive Disturbance"],
+      "outcome": {
+        "modelingType": ["continuous"]
+      },
+      "explanatory": {
+        "modelingType": ["continuous", "discrete", "nominal", "ordinal", "binary"]
+      },
+      "vignette.url": ["http://docs.zeligproject.org/articles/zelig_ar.html"],
+      "wrapper": ["ar"],
+      "tree": ["Zelig-ar", "Zelig-timeseries"]
+    }
+  }
+}
+
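The zelig5models.json file added above is a machine-readable catalogue: each entry records a model's name, description, outcome and explanatory types, vignette URL, wrapper function, and class tree. A short sketch of how such a catalogue could be browsed with jsonlite; the installed path obtained via system.file() is an assumption about where the file ends up:

library(jsonlite)

# Read the model catalogue shipped in inst/JSON/ (path is an assumption)
json_path <- system.file("JSON", "zelig5models.json", package = "Zelig")
catalogue <- fromJSON(json_path)

# Flatten to one row per model: name, wrapper function, description
models <- catalogue$zelig5models
data.frame(
  name        = vapply(models, function(m) m$name[1], character(1)),
  wrapper     = vapply(models, function(m) m$wrapper[1], character(1)),
  description = vapply(models, function(m) m$description[1], character(1)),
  row.names   = NULL
)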
diff --git a/man/ATT.Rd b/man/ATT.Rd
new file mode 100644
index 0000000..9a900ec
--- /dev/null
+++ b/man/ATT.Rd
@@ -0,0 +1,35 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/wrappers.R
+\name{ATT}
+\alias{ATT}
+\title{Compute simulated (sample) average treatment effects on the treated from
+a Zelig model estimation}
+\usage{
+ATT(object, treatment, treated = 1, num = NULL)
+}
+\arguments{
+\item{object}{an object of class Zelig}
+
+\item{treatment}{character string naming the variable that denotes the
+treatment and non-treated groups.}
+
+\item{treated}{value of \code{treatment} variable indicating treatment}
+
+\item{num}{number of simulations to run. Default is 1000.}
+}
+\description{
+Compute simulated (sample) average treatment effects on the treated from
+a Zelig model estimation
+}
+\examples{
+library(dplyr)
+data(sanction)
+z.att <- zelig(num ~ target + coop + mil, model = "poisson",
+                 data = sanction) \%>\%
+             ATT(treatment = "mil") \%>\%
+             get_qi(qi = "ATT", xvalue = "TE")
+
+}
+\author{
+Christopher Gandrud
+}
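The example in ATT.Rd chains its calls with the dplyr pipe; the same computation written out step by step, as a sketch using the sanction example above:

# Step-by-step version of the piped example above
data(sanction)
z.out <- zelig(num ~ target + coop + mil, model = "poisson", data = sanction)
z.att <- ATT(z.out, treatment = "mil")      # simulate treatment effects on the treated
get_qi(z.att, qi = "ATT", xvalue = "TE")    # extract the simulated quantities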
diff --git a/man/CigarettesSW.Rd b/man/CigarettesSW.Rd
new file mode 100644
index 0000000..205c4bc
--- /dev/null
+++ b/man/CigarettesSW.Rd
@@ -0,0 +1,16 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/datasets.R
+\docType{data}
+\name{CigarettesSW}
+\alias{CigarettesSW}
+\title{Cigarette Consumption Panel Data}
+\format{A data set with 96 observations and 9 variables}
+\source{
+From Christian Kleiber and Achim Zeileis (2008). Applied
+Econometrics with R. New York: Springer-Verlag. ISBN 978-0-387-77316-2. URL
+\url{https://CRAN.R-project.org/package=AER}
+}
+\description{
+Cigarette Consumption Panel Data
+}
+\keyword{datasets}
diff --git a/man/MatchIt.url.Rd b/man/MatchIt.url.Rd
new file mode 100644
index 0000000..e492058
--- /dev/null
+++ b/man/MatchIt.url.Rd
@@ -0,0 +1,15 @@
+\name{MatchIt.url}
+
+\alias{MatchIt.url}
+
+\title{Table of links for Zelig}
+
+\description{
+  Table of links for \code{help.zelig} for the companion MatchIt package.  
+}
+
+\keyword{datasets}
+
+
+
+
diff --git a/man/Median.Rd b/man/Median.Rd
new file mode 100644
index 0000000..df0146a
--- /dev/null
+++ b/man/Median.Rd
@@ -0,0 +1,22 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{Median}
+\alias{Median}
+\title{Compute the Statistical Median of a Vector}
+\usage{
+Median(x, na.rm = NULL)
+}
+\arguments{
+\item{x}{a vector of numeric or ordered values}
+
+\item{na.rm}{ignored}
+}
+\value{
+the median of the vector
+}
+\description{
+Compute the Statistical Median of a Vector
+}
+\author{
+Matt Owen
+}
diff --git a/man/Mode.Rd b/man/Mode.Rd
new file mode 100644
index 0000000..0a67786
--- /dev/null
+++ b/man/Mode.Rd
@@ -0,0 +1,22 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{Mode}
+\alias{Mode}
+\alias{mode}
+\title{Compute the Statistical Mode of a Vector}
+\usage{
+Mode(x)
+}
+\arguments{
+\item{x}{a vector of numeric, factor, or ordered values}
+}
+\value{
+the statistical mode of the vector. If more than one mode exists,
+ the last one in the factor order is arbitrarily chosen (by design)
+}
+\description{
+Compute the Statistical Mode of a Vector
+}
+\author{
+Christopher Gandrud and Matt Owen
+}
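Median() and Mode() documented above are the small utilities from R/utils.R. A minimal illustration on an ordered factor; the example values are made up:

# Illustrative values only
x <- factor(c("low", "medium", "medium", "medium", "high"),
            levels = c("low", "medium", "high"), ordered = TRUE)

Mode(x)    # the most frequent level, "medium"
Median(x)  # the middle value of the ordered factor, "medium"; na.rm is ignored per the docs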
diff --git a/man/PErisk.Rd b/man/PErisk.Rd
new file mode 100644
index 0000000..1655dc3
--- /dev/null
+++ b/man/PErisk.Rd
@@ -0,0 +1,76 @@
+\name{PErisk}
+
+\alias{PErisk}
+
+\title{Political Economic Risk Data from 62 Countries in 1987}
+
+\description{
+ Political Economic Risk Data from 62 Countries in 1987.
+
+}
+
+\usage{data(PErisk)}
+
+\format{ 
+	A data frame with 62 observations on the following 6 variables.
+	All data points are from 1987. See Quinn (2004) for more
+	details. 
+
+	country: a factor with levels 'Argentina' 'Australia' 'Austria'
+          'Bangladesh' 'Belgium' 'Bolivia' 'Botswana' 'Brazil' 'Burma'
+          'Cameroon' 'Canada' 'Chile' 'Colombia' 'Congo-Kinshasa'
+          'Costa Rica' 'Cote d'Ivoire' 'Denmark' 'Dominican Republic'
+          'Ecuador' 'Finland' 'Gambia, The' 'Ghana' 'Greece' 'Hungary'
+          'India' 'Indonesia' 'Iran' 'Ireland' 'Israel' 'Italy' 'Japan'
+          'Kenya' 'Korea, South' 'Malawi' 'Malaysia' 'Mexico' 'Morocco'
+          'New Zealand' 'Nigeria' 'Norway' 'Papua New Guinea'
+          'Paraguay' 'Philippines' 'Poland' 'Portugal' 'Sierra Leone'
+          'Singapore' 'South Africa' 'Spain' 'Sri Lanka' 'Sweden'
+          'Switzerland' 'Syria' 'Thailand' 'Togo' 'Tunisia' 'Turkey'
+          'United Kingdom' 'Uruguay' 'Venezuela' 'Zambia' 'Zimbabwe'
+
+     courts: an ordered factor with levels '0' < '1'. 'courts' is an
+          indicator of whether the country in question is judged to
+          have an independent judiciary. From Henisz (2002).
+
+     barb2: a numeric vector giving the natural log of the black market
+          premium in each country. The black market premium is coded as
+          the black market exchange rate (local currency per dollar)
+          divided by the official exchange  rate minus 1. From
+          Marshall, Gurr, and Harff (2002). 
+
+    prsexp2: an ordered factor with levels '0' < '1' < '2' < '3' < '4'
+          < '5', giving the lack of expropriation risk. From Marshall,
+          Gurr, and Harff (2002).
+
+   prscorr2: an ordered factor with levels '0' < '1' < '2' < '3' < '4'
+          < '5', measuring the lack of corruption. From Marshall, Gurr,
+          and Harff (2002).
+
+     gdpw2: a numeric vector giving the natural log of real GDP per
+          worker in 1985 international prices. From Alvarez et al.
+          (1999).
+}
+
+\source{
+     Mike Alvarez, Jose Antonio Cheibub, Fernando Limongi, and Adam
+     Przeworski. 1999. ``ACLP Political and Economic Database.'' <URL:
+     http://www.ssc.upenn.edu/~cheibub/data/>.
+
+     Witold J. Henisz. 2002. ``The Political Constraint Index (POLCON)
+     Dataset.'' <URL:
+     http://www-management.wharton.upenn.edu/henisz/POLCON/ContactInfo.html>.
+
+     Monty G. Marshall, Ted Robert Gurr, and Barbara Harff. 2002.
+     ``State Failure Task Force Problem Set.'' <URL:
+     http://www.cidcm.umd.edu/inscr/stfail/index.htm>.
+}
+
+\references{
+     Kevin M. Quinn. 2004. ``Bayesian Factor Analysis for Mixed Ordinal
+     and Continuous Response.'' \emph{Political Analysis}. Vol. 12, pp. 338--353.
+}
+
+
+\keyword{datasets}
diff --git a/man/SupremeCourt.Rd b/man/SupremeCourt.Rd
new file mode 100644
index 0000000..9b04226
--- /dev/null
+++ b/man/SupremeCourt.Rd
@@ -0,0 +1,31 @@
+\name{SupremeCourt}
+
+\alias{SupremeCourt}
+
+\title{U.S. Supreme Court Vote Matrix}
+
+\description{
+      This dataframe contains a matrix of votes cast by U.S. Supreme Court
+     justices in all cases in the 2000 term.
+}
+
+\usage{data(SupremeCourt)}
+
+\format{ The dataframe contains data for justices Rehnquist, Stevens,
+     O'Connor, Scalia, Kennedy, Souter, Thomas, Ginsburg, and Breyer
+     for the 2000 term of the U.S. Supreme Court.  It contains data
+     from 43 non-unanimous cases. The votes are coded liberal (1) and
+     conservative (0) using the protocol of Spaeth (2003).   The unit
+     of analysis is the case citation (ANALU=0).  We are concerned with
+     formally decided cases issued with written opinions, after full
+     oral argument and cases decided by an equally divided vote
+     (DECTYPE=1,5,6,7).}
+
+\source{
+     Harold J. Spaeth (2005). ``Original United States Supreme Court
+     Database:  1953-2004 Terms.'' 
+     <URL:http://www.as.uky.edu/polisci/ulmerproject/sctdata.htm>.
+}
+
+
+\keyword{datasets}
diff --git a/man/Weimar.Rd b/man/Weimar.Rd
new file mode 100644
index 0000000..6f4d191
--- /dev/null
+++ b/man/Weimar.Rd
@@ -0,0 +1,32 @@
+\name{Weimar}
+\alias{Weimar}
+
+\title{1932 Weimar election data}
+
+\description{ This data set contains election results for 10 kreise (equivalent to precincts) from the 1932 Weimar (German) election.  
+}
+
+\usage{data(Weimar)}
+
+\format{A table containing 11 variables and 10 observations.  The variables are
+\describe{
+\item{Nazi}{Number of votes for the Nazi party}
+\item{Government}{Number of votes for the Government}
+\item{Communists}{Number of votes for the Communist party}
+\item{FarRight}{Number of votes for far right parties}
+\item{Other}{Number of votes for other parties, and non-voters}
+\item{shareunemployed}{Proportion unemployed}
+\item{shareblue}{Proportion working class}
+\item{sharewhite}{Proportion white-collar workers}
+\item{sharedomestic}{Proportion domestic servants}
+\item{shareprotestants}{Proportion Protestant}
+}
+}
+
+\source{ICPSR}
+
+%\references{
+%}
+
+\keyword{datasets}
+
diff --git a/man/Zelig-ar-class.Rd b/man/Zelig-ar-class.Rd
new file mode 100644
index 0000000..1f2d55f
--- /dev/null
+++ b/man/Zelig-ar-class.Rd
@@ -0,0 +1,89 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-ar.R
+\docType{class}
+\name{Zelig-ar-class}
+\alias{Zelig-ar-class}
+\alias{zar}
+\title{Time-Series Model with Autoregressive Disturbance}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. For example, to run the same model on all fifty states, you could
+use: \code{z.out <- zelig(y ~ x1 + x2, data = mydata, model = 'ls',
+by = 'state')} You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{ts}{The name of the variable containing the time indicator. This should be passed in as
+a string. If this variable is not provided, Zelig will assume that the data is already
+ordered by time.}
+
+\item{cs}{Name of a variable that denotes the cross-sectional element of the data, for example,
+country name in a dataset with time-series across different countries. As a variable name,
+this should be in quotes. If this is not provided, Zelig will assume that all observations
+come from the same unit over time, and should be pooled, but if provided, individual models will
+be run in each cross-section.
+If \code{cs} is given as an argument, \code{ts} must also be provided. Additionally, \code{by}
+must be \code{NULL}.}
+
+\item{order}{A vector of length 3 passed in as \code{c(p,d,q)} where p represents the order of the
+autoregressive model, d represents the number of differences taken in the model, and q represents
+the order of the moving average model.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Warning: \code{summary} does not work with timeseries models after
+simulation.
+}
+\details{
+Currently only the Reference class syntax is supported for time series models. This model
+does not accept bootstraps or weights.
+}
+
+\examples{
+data(seatshare)
+subset <- seatshare[seatshare$country == "UNITED KINGDOM",]
+ts.out <- zelig(formula = unemp ~ leftseat, model = "ar", ts = "year", data = subset)
+summary(ts.out)
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_ar.html}
+}
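The details above note that time-series models currently support only the reference-class syntax and take neither bootstraps nor weights. A sketch of that workflow for the "ar" model, by analogy with the zarima example that follows; the leftseat values passed to setx()/setx1() are illustrative assumptions:

# Reference-class workflow for the "ar" model (covariate values are illustrative)
data(seatshare)
uk <- seatshare[seatshare$country == "UNITED KINGDOM", ]

ts.out <- zar$new()
ts.out$zelig(unemp ~ leftseat, ts = "year", data = uk)

ts.out$setx(leftseat = 0.75)    # baseline covariate value
ts.out$setx1(leftseat = 0.25)   # counterfactual value
ts.out$sim()                    # summary() is not supported after simulation (see above)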
diff --git a/man/Zelig-arima-class.Rd b/man/Zelig-arima-class.Rd
new file mode 100644
index 0000000..1434413
--- /dev/null
+++ b/man/Zelig-arima-class.Rd
@@ -0,0 +1,94 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-arima.R
+\docType{class}
+\name{Zelig-arima-class}
+\alias{Zelig-arima-class}
+\alias{zarima}
+\title{Autoregressive and Moving-Average Models with Integration for Time-Series Data}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. For example, to run the same model on all fifty states, you could
+use: \code{z.out <- zelig(y ~ x1 + x2, data = mydata, model = 'ls',
+by = 'state')} You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{ts}{The name of the variable containing the time indicator. This should be passed in as
+a string. If this variable is not provided, Zelig will assume that the data is already
+ordered by time.}
+
+\item{cs}{Name of a variable that denotes the cross-sectional element of the data, for example,
+country name in a dataset with time-series across different countries. As a variable name,
+this should be in quotes. If this is not provided, Zelig will assume that all observations
+come from the same unit over time, and should be pooled, but if provided, individual models will
+be run in each cross-section.
+If \code{cs} is given as an argument, \code{ts} must also be provided. Additionally, \code{by}
+must be \code{NULL}.}
+
+\item{order}{A vector of length 3 passed in as \code{c(p,d,q)} where p represents the order of the
+autoregressive model, d represents the number of differences taken in the model, and q represents
+the order of the moving average model.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Warning: \code{summary} does not work with timeseries models after
+simulation.
+}
+\details{
+Currently only the Reference class syntax is supported for time series models. This model
+does not accept bootstraps or weights.
+}
+
+\examples{
+data(seatshare)
+subset <- seatshare[seatshare$country == "UNITED KINGDOM",]
+ts.out <- zarima$new()
+ts.out$zelig(unemp ~ leftseat, order = c(1, 0, 1), data = subset)
+
+# Set fitted values and simulate quantities of interest
+ts.out$setx(leftseat = 0.75)
+ts.out$setx1(leftseat = 0.25)
+ts.out$sim()
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_arima.html}
+}
diff --git a/man/Zelig-bayes-class.Rd b/man/Zelig-bayes-class.Rd
new file mode 100644
index 0000000..512add0
--- /dev/null
+++ b/man/Zelig-bayes-class.Rd
@@ -0,0 +1,19 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-bayes.R
+\docType{class}
+\name{Zelig-bayes-class}
+\alias{Zelig-bayes-class}
+\alias{zbayes}
+\title{Bayes Model object for inheritance across models in Zelig}
+\description{
+Bayes Model object for inheritance across models in Zelig
+}
+\section{Methods}{
+
+\describe{
+\item{\code{get_coef(nonlist = FALSE)}}{Get estimated model coefficients}
+
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
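The Bayes parent class above is inherited by the *-bayes models catalogued in zelig5models.json (logit.bayes, normal.bayes, poisson.bayes, and so on). A hedged sketch of estimating one of them through the usual zelig() call; the turnout data and its vote, age, and race columns are assumptions based on the data/turnout.tab.gz file added earlier:

# Assumes the turnout example data with vote, age, and race columns
data(turnout)
z.out <- zelig(vote ~ age + race, model = "logit.bayes", data = turnout)
summary(z.out)
z.out$get_coef()   # reference-class method listed above: fetch estimated coefficients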
diff --git a/man/Zelig-bbinchoice-class.Rd b/man/Zelig-bbinchoice-class.Rd
deleted file mode 100644
index a6b7029..0000000
--- a/man/Zelig-bbinchoice-class.Rd
+++ /dev/null
@@ -1,17 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/model-bbinchoice.R
-\docType{class}
-\name{Zelig-bbinchoice-class}
-\alias{Zelig-bbinchoice-class}
-\alias{zbbinchoice}
-\title{Bivariate Binary Choice object for inheritance across models in ZeligChoice}
-\description{
-Bivariate Binary Choice object for inheritance across models in ZeligChoice
-}
-\section{Methods}{
-
-\describe{
-\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
-  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
-}}
-
diff --git a/man/Zelig-binchoice-class.Rd b/man/Zelig-binchoice-class.Rd
new file mode 100644
index 0000000..61f624e
--- /dev/null
+++ b/man/Zelig-binchoice-class.Rd
@@ -0,0 +1,11 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-binchoice.R
+\docType{class}
+\name{Zelig-binchoice-class}
+\alias{Zelig-binchoice-class}
+\alias{zbinchoice}
+\title{Binary Choice object for inheritance across models in Zelig}
+\description{
+Binary Choice object for inheritance across models in Zelig
+}
+
diff --git a/man/Zelig-binchoice-gee-class.Rd b/man/Zelig-binchoice-gee-class.Rd
new file mode 100644
index 0000000..ed5909f
--- /dev/null
+++ b/man/Zelig-binchoice-gee-class.Rd
@@ -0,0 +1,13 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-binchoice-gee.R
+\docType{class}
+\name{Zelig-binchoice-gee-class}
+\alias{Zelig-binchoice-gee-class}
+\alias{zbinchoicegee}
+\title{Object for Binary Choice outcomes in Generalized Estimating Equations 
+for inheritance across models in Zelig}
+\description{
+Object for Binary Choice outcomes in Generalized Estimating Equations 
+for inheritance across models in Zelig
+}
+
diff --git a/man/Zelig-binchoice-survey-class.Rd b/man/Zelig-binchoice-survey-class.Rd
new file mode 100644
index 0000000..d59164f
--- /dev/null
+++ b/man/Zelig-binchoice-survey-class.Rd
@@ -0,0 +1,13 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-binchoice-survey.R
+\docType{class}
+\name{Zelig-binchoice-survey-class}
+\alias{Zelig-binchoice-survey-class}
+\alias{zbinchoicesurvey}
+\title{Object for Binary Choice outcomes with Survey Weights
+for inheritance across models in Zelig}
+\description{
+Object for Binary Choice outcomes with Survey Weights
+for inheritance across models in Zelig
+}
+
diff --git a/man/Zelig-blogit-class.Rd b/man/Zelig-blogit-class.Rd
deleted file mode 100644
index a2914e8..0000000
--- a/man/Zelig-blogit-class.Rd
+++ /dev/null
@@ -1,11 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/model-blogit.R
-\docType{class}
-\name{Zelig-blogit-class}
-\alias{Zelig-blogit-class}
-\alias{zblogit}
-\title{Bivariate Logistic Regression for Two Dichotomous Dependent Variables}
-\description{
-Vignette: \url{http://docs.zeligproject.org/articles/zeligchoice_blogit.html}
-}
-
diff --git a/man/Zelig-bprobit-class.Rd b/man/Zelig-bprobit-class.Rd
deleted file mode 100644
index 763623c..0000000
--- a/man/Zelig-bprobit-class.Rd
+++ /dev/null
@@ -1,11 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/model-bprobit.R
-\docType{class}
-\name{Zelig-bprobit-class}
-\alias{Zelig-bprobit-class}
-\alias{zbprobit}
-\title{Bivariate Probit Regression for Two Dichotomous Dependent Variables}
-\description{
-Vignette: \url{http://docs.zeligproject.org/articles/zeligchoice_bprobit.html}
-}
-
diff --git a/man/Zelig-class.Rd b/man/Zelig-class.Rd
new file mode 100644
index 0000000..9fba192
--- /dev/null
+++ b/man/Zelig-class.Rd
@@ -0,0 +1,155 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-zelig.R
+\docType{class}
+\name{Zelig-class}
+\alias{Zelig-class}
+\alias{z}
+\title{Zelig reference class}
+\description{
+Zelig website: \url{https://zeligproject.org/}
+}
+\section{Fields}{
+
+\describe{
+\item{\code{fn}}{R function to call to wrap}
+
+\item{\code{formula}}{Zelig formula}
+
+\item{\code{weights}}{[forthcoming]}
+
+\item{\code{name}}{name of the Zelig model}
+
+\item{\code{data}}{data frame or matrix}
+
+\item{\code{by}}{split the data by factors}
+
+\item{\code{mi}}{work with imputed dataset}
+
+\item{\code{idx}}{model index}
+
+\item{\code{zelig.call}}{Zelig function call}
+
+\item{\code{model.call}}{wrapped function call}
+
+\item{\code{zelig.out}}{estimated zelig model(s)}
+
+\item{\code{setx.out}}{set values}
+
+\item{\code{setx.labels}}{pretty-print qi}
+
+\item{\code{bsetx}}{is x set?}
+
+\item{\code{bsetx1}}{is x1 set?}
+
+\item{\code{bsetrange}}{is range set?}
+
+\item{\code{bsetrange1}}{is range1 set?}
+
+\item{\code{range}}{range}
+
+\item{\code{range1}}{range1}
+
+\item{\code{test.statistics}}{list of test statistics}
+
+\item{\code{sim.out}}{simulated qi's}
+
+\item{\code{simparam}}{simulated parameters}
+
+\item{\code{num}}{number of simulations}
+
+\item{\code{authors}}{Zelig model authors}
+
+\item{\code{zeligauthors}}{Zelig authors}
+
+\item{\code{modelauthors}}{wrapped model authors}
+
+\item{\code{packageauthors}}{wrapped package authors}
+
+\item{\code{refs}}{citation information}
+
+\item{\code{year}}{year the model was released}
+
+\item{\code{description}}{model description}
+
+\item{\code{url}}{model URL}
+
+\item{\code{url.docs}}{model documentation URL}
+
+\item{\code{category}}{model category}
+
+\item{\code{vignette.url}}{vignette URL}
+
+\item{\code{json}}{JSON export}
+
+\item{\code{ljson}}{JSON export}
+
+\item{\code{outcome}}{JSON export}
+
+\item{\code{wrapper}}{JSON export}
+
+\item{\code{explanatory}}{JSON export}
+
+\item{\code{mcunit.test}}{unit testing}
+
+\item{\code{with.feedback}}{Feedback}
+
+\item{\code{robust.se}}{return robust standard errors}
+}}
+
+\section{Methods}{
+
+\describe{
+\item{\code{ATT(treatment, treated = 1, quietly = TRUE, num = NULL)}}{Generic Method for Computing Simulated (Sample) Average Treatment Effects on the Treated}
+
+\item{\code{cite()}}{Provide citation information about Zelig and Zelig model, and about wrapped package and wrapped model}
+
+\item{\code{feedback()}}{Send feedback to the Zelig team}
+
+\item{\code{from_zelig_model()}}{Extract the original fitted model object from a zelig call. Note only works for models using directly wrapped functions.}
+
+\item{\code{get_coef(nonlist = FALSE)}}{Get estimated model coefficients}
+
+\item{\code{get_df_residual()}}{Get residual degrees-of-freedom}
+
+\item{\code{get_fitted(...)}}{Get estimated fitted values}
+
+\item{\code{get_model_data()}}{Get data used to estimate the model}
+
+\item{\code{get_names()}}{Return Zelig object field names}
+
+\item{\code{get_predict(...)}}{Get predicted values}
+
+\item{\code{get_pvalue()}}{Get estimated model p-values}
+
+\item{\code{get_qi(qi = "ev", xvalue = "x", subset = NULL)}}{Get quantities of interest}
+
+\item{\code{get_residuals(...)}}{Get estimated model residuals}
+
+\item{\code{get_se()}}{Get estimated model standard errors}
+
+\item{\code{get_vcov()}}{Get estimated model variance-covariance matrix}
+
+\item{\code{graph(...)}}{Plot the quantities of interest}
+
+\item{\code{help()}}{Open the model vignette from https://zeligproject.org/}
+
+\item{\code{packagename()}}{Automatically retrieve wrapped package name}
+
+\item{\code{references(style = "sphinx")}}{Construct a reference list specific to a Zelig model.}
+
+\item{\code{set(..., fn = list(numeric = mean, ordered = Median))}}{Setting Explanatory Variable Values}
+
+\item{\code{sim(num = NULL)}}{Generic Method for Computing and Organizing Simulated Quantities of Interest}
+
+\item{\code{simATT(simparam, data, depvar, treatment, treated)}}{Simulate an Average Treatment on the Treated}
+
+\item{\code{summarise(...)}}{Display a Zelig object}
+
+\item{\code{summarize(...)}}{Display a Zelig object}
+
+\item{\code{toJSON()}}{Convert Zelig object to JSON format}
+
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
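
To make the reference-class interface documented above concrete, the following is a minimal sketch of a typical workflow, assuming Zelig 5's standard zelig()/setx()/sim() functions and the turnout example data used elsewhere in these man pages; the getter methods are those listed in the Methods section above.

    library(Zelig)
    data(turnout)

    # estimate a model through the common zelig() interface
    z.out <- zelig(vote ~ age + educate, model = "logit", data = turnout,
                   cite = FALSE)

    # getters documented above (reference-class methods)
    z.out$get_coef()   # estimated coefficients
    z.out$get_vcov()   # variance-covariance matrix

    # set explanatory values, simulate quantities of interest, and plot
    x.out <- setx(z.out, educate = 16)
    s.out <- sim(z.out, x = x.out)
    summary(s.out)
    plot(s.out)
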
diff --git a/man/Zelig-exp-class.Rd b/man/Zelig-exp-class.Rd
new file mode 100644
index 0000000..16cd77b
--- /dev/null
+++ b/man/Zelig-exp-class.Rd
@@ -0,0 +1,95 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-exp.R
+\docType{class}
+\name{Zelig-exp-class}
+\alias{Zelig-exp-class}
+\alias{zexp}
+\title{Exponential Regression for Duration Dependent Variables}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. For example, to run the same model on all fifty states, you could
+use: \code{z.out <- zelig(y ~ x1 + x2, data = mydata, model = 'ls',
+by = 'state')} You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{robust}{defaults to FALSE. If TRUE, zelig() computes robust standard errors based on sandwich estimators and the options selected in cluster.}
+
+\item{cluster}{if robust = TRUE, you may select a variable to define groups of correlated observations. Let x3 be a variable that consists of either discrete numeric values, character strings, or factors that define strata. Then
+z.out <- zelig(y ~ x1 + x2, robust = TRUE, cluster = "x3", model = "exp", data = mydata)
+means that the observations can be correlated within the strata defined by the variable x3, and that robust standard errors should be calculated according to those clusters. If robust = TRUE but cluster is not specified, zelig() assumes that each observation falls into its own cluster.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Exponential Regression for Duration Dependent Variables
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+\section{Methods}{
+
+\describe{
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
+\examples{
+library(Zelig)
+data(coalition)
+library(survival)
+z.out <- zelig(Surv(duration, ciep12) ~ fract + numst2, model = "exp",
+               data = coalition)
+summary(z.out)
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_exp.html}
+}
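
The exponential model's example above stops at summary(); as a hedged sketch following the setx()/sim() pattern used in the other man pages in this diff, expected durations can be compared at two values of numst2 (assumed here to be a 0/1 indicator in the coalition data):

    library(Zelig)
    library(survival)
    data(coalition)

    z.out <- zelig(Surv(duration, ciep12) ~ fract + numst2, model = "exp",
                   data = coalition)

    # compare simulated quantities of interest across coalition status
    x.low  <- setx(z.out, numst2 = 0)
    x.high <- setx(z.out, numst2 = 1)
    s.out  <- sim(z.out, x = x.low, x1 = x.high)
    summary(s.out)
    plot(s.out)
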
diff --git a/man/Zelig-factor-bayes-class.Rd b/man/Zelig-factor-bayes-class.Rd
new file mode 100644
index 0000000..e8ac67c
--- /dev/null
+++ b/man/Zelig-factor-bayes-class.Rd
@@ -0,0 +1,155 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-factor-bayes.R
+\docType{class}
+\name{Zelig-factor-bayes-class}
+\alias{Zelig-factor-bayes-class}
+\alias{zfactorbayes}
+\title{Bayesian Factor Analysis}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{~ Y1 + Y2 + Y3}, where Y1, Y2, and Y3 are variables
+of interest in factor analysis (manifest variables), assumed to be
+normally distributed. The model requires a minimum of three manifest
+variables contained in the
+same dataset. The \code{+} symbol means ``inclusion'' not
+``addition.''}
+
+\item{factors}{number of the factors to be fitted (defaults to 2).}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Bayesian Factor Analysis
+}
+\details{
+In addition, \code{zelig()} accepts the following additional arguments for model specification:
+\itemize{
+     \item \code{lambda.constraints}: list containing the equality or
+     inequality constraints on the factor loadings. Choose from one of the following forms:
+     \item \code{varname = list()}: by default, no constraints are imposed.
+     \item \code{varname = list(d, c)}: constrains the dth loading for the
+           variable named varname to be equal to c.
+     \item \code{varname = list(d, +)}: constrains the dth loading for the variable named varname to be positive;
+     \item \code{varname = list(d, -)}: constrains the dth loading for the variable named varname to be negative.
+     \item \code{std.var}: defaults to \code{FALSE} (manifest variables are rescaled to
+     zero mean, but retain observed variance). If \code{TRUE}, the manifest
+     variables are rescaled to be mean zero and unit variance.
+}
+
+In addition, \code{zelig()} accepts the following additional inputs for \code{bayes.factor}:
+\itemize{
+    \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+    \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 20,000).
+    \item \code{thin}: thinning interval for the Markov chain. Only every thin-th
+        draw from the Markov chain is kept. The value of mcmc must be divisible
+        by this value. The default value is 1.
+    \item \code{verbose}: defaults to FALSE. If TRUE, the
+    progress of the sampler (every 10\%) is printed to the screen.
+    \item \code{seed}: seed for the random number generator. The default is NA, which
+    corresponds to a seed of 12345.
+    \item \code{Lambda.start}: starting values of the factor loading matrix \eqn{\Lambda}, either a
+    scalar (all unconstrained loadings are set to that value), or a matrix with
+    compatible dimensions. The default is NA, where the start values are set to
+    0 for unconstrained factor loadings, and 0.5 or -0.5 for constrained
+    factor loadings (depending on the nature of the constraints).
+    \item \code{Psi.start}: starting values for the uniquenesses, either a scalar
+    (the starting values for all diagonal elements of \eqn{\Psi} are set to be this value),
+    or a vector with length equal to the number of manifest variables. In the latter
+    case, the starting values of the diagonal elements of \eqn{\Psi} take the values of
+    Psi.start. The default value is NA, where the starting values of all the
+    uniquenesses are set to 0.5.
+    \item \code{store.lambda}: defaults to TRUE, which stores the posterior draws of the factor loadings.
+    \item \code{store.scores}: defaults to FALSE. If TRUE, stores the posterior draws of the
+    factor scores. (Storing factor scores may take a large amount of memory for a large
+    number of draws or observations.)
+}
+
+The model also accepts the following additional arguments to specify prior parameters:
+\itemize{
+    \item \code{l0}: mean of the Normal prior for the factor loadings, either a scalar or a
+    matrix with the same dimensions as \eqn{\Lambda}. If a scalar value, that value will be the
+    prior mean for all the factor loadings. Defaults to 0.
+    \item \code{L0}: precision parameter of the Normal prior for the factor loadings, either
+    a scalar or a matrix with the same dimensions as \eqn{\Lambda}. If \code{L0} takes a scalar value,
+    then the precision matrix will be a diagonal matrix with the diagonal elements
+    set to that value. The default value is 0, which leads to an improper prior.
+    \item \code{a0}: the shape parameter of the Inverse Gamma prior for the uniquenesses
+    is \code{a0}/2. It can take a scalar value or a vector. The default value is 0.001.
+    \item \code{b0}: the scale parameter of the Inverse Gamma prior for the uniquenesses
+    is \code{b0}/2. It can take a scalar value or a vector. The default value is 0.001.
+}
+
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+}
+}
+\section{Methods}{
+
+\describe{
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
+\examples{
+\dontrun{
+data(swiss)
+names(swiss) <- c("Fert", "Agr", "Exam", "Educ", "Cath", "InfMort")
+z.out <- zelig(~ Agr + Exam + Educ + Cath + InfMort,
+model = "factor.bayes", data = swiss,
+factors = 2, verbose = FALSE,
+a0 = 1, b0 = 0.15, burnin = 500, mcmc = 5000)
+
+z.out$geweke.diag()
+z.out <- zelig(~ Agr + Exam + Educ + Cath + InfMort,
+model = "factor.bayes", data = swiss, factors = 2,
+lambda.constraints =
+   list(Exam = list(1,"+"),
+        Exam = list(2,"-"),
+        Educ = c(2, 0),
+        InfMort = c(1, 0)),
+verbose = FALSE, a0 = 1, b0 = 0.15,
+burnin = 500, mcmc = 5000)
+
+z.out$geweke.diag()
+z.out$heidel.diag()
+z.out$raftery.diag()
+summary(z.out)
+}
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_factorbayes.html}
+}
diff --git a/man/Zelig-gamma-class.Rd b/man/Zelig-gamma-class.Rd
new file mode 100644
index 0000000..31540c3
--- /dev/null
+++ b/man/Zelig-gamma-class.Rd
@@ -0,0 +1,79 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-gamma.R
+\docType{class}
+\name{Zelig-gamma-class}
+\alias{Zelig-gamma-class}
+\alias{zgamma}
+\title{Gamma Regression for Continuous, Positive Dependent Variables}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Gamma Regression for Continuous, Positive Dependent Variables
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+
+\examples{
+library(Zelig)
+data(coalition)
+z.out <- zelig(duration ~ fract + numst2, model = "gamma", data = coalition)
+summary(z.out)
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_gamma.html}
+}
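
The details above describe the optional weights and bootstrap arguments; the following is a minimal, illustrative sketch of passing an integer bootstrap count (the value 100 is an arbitrary choice, not a recommendation):

    library(Zelig)
    data(coalition)

    # re-estimate the gamma example with bootstrapped parameter uncertainty
    z.out <- zelig(duration ~ fract + numst2, model = "gamma",
                   data = coalition, bootstrap = 100)
    summary(z.out)
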
diff --git a/man/Zelig-gamma-gee-class.Rd b/man/Zelig-gamma-gee-class.Rd
new file mode 100644
index 0000000..44c0ab4
--- /dev/null
+++ b/man/Zelig-gamma-gee-class.Rd
@@ -0,0 +1,95 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-gamma-gee.R
+\docType{class}
+\name{Zelig-gamma-gee-class}
+\alias{Zelig-gamma-gee-class}
+\alias{zgammagee}
+\title{Generalized Estimating Equation for Gamma Regression}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{id:}{where id is a variable which identifies the clusters. The data should be sorted
+by id and should be ordered within each cluster when appropriate}
+
+\item{corstr:}{character string specifying the correlation structure: "independence",
+"exchangeable", "ar1", "unstructured" and "userdefined"}
+
+\item{geeglm:}{See geeglm in package geepack for other function arguments}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Generalized Estimating Equation for Gamma Regression
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+
+\examples{
+library(Zelig)
+data(coalition)
+coalition$cluster <- c(rep(c(1:62), 5), rep(c(63), 4))
+sorted.coalition <- coalition[order(coalition$cluster),]
+z.out <- zelig(duration ~ fract + numst2, model = "gamma.gee", id = "cluster",
+               data = sorted.coalition, corstr = "exchangeable")
+summary(z.out)
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_gammagee.html}
+}
diff --git a/man/Zelig-gamma-survey-class.Rd b/man/Zelig-gamma-survey-class.Rd
new file mode 100644
index 0000000..d6f6c7f
--- /dev/null
+++ b/man/Zelig-gamma-survey-class.Rd
@@ -0,0 +1,80 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-gamma-survey.R
+\docType{class}
+\name{Zelig-gamma-survey-class}
+\alias{Zelig-gamma-survey-class}
+\alias{zgammasurvey}
+\title{Gamma Regression with Survey Weights}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Gamma Regression with Survey Weights
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+
+\examples{
+library(Zelig)
+data(api, package="survey")
+z.out1 <- zelig(api00 ~ meals + yr.rnd, model = "gamma.survey",
+weights = ~pw, data = apistrat)
+summary(z.out1)
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_gammasurvey.html}
+}
diff --git a/man/Zelig-gee-class.Rd b/man/Zelig-gee-class.Rd
new file mode 100644
index 0000000..b4b4780
--- /dev/null
+++ b/man/Zelig-gee-class.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-gee.R
+\docType{class}
+\name{Zelig-gee-class}
+\alias{Zelig-gee-class}
+\alias{zgee}
+\title{Generalized Estimating Equations Model object for inheritance across models in Zelig}
+\description{
+Generalized Estimating Equations Model object for inheritance across models in Zelig
+}
+\section{Methods}{
+
+\describe{
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
diff --git a/man/Zelig-glm-class.Rd b/man/Zelig-glm-class.Rd
new file mode 100644
index 0000000..c65d718
--- /dev/null
+++ b/man/Zelig-glm-class.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-glm.R
+\docType{class}
+\name{Zelig-glm-class}
+\alias{Zelig-glm-class}
+\alias{zglm}
+\title{Generalized Linear Model object for inheritance across models in Zelig}
+\description{
+Generalized Linear Model object for inheritance across models in Zelig
+}
+\section{Methods}{
+
+\describe{
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
diff --git a/man/Zelig-ivreg-class.Rd b/man/Zelig-ivreg-class.Rd
new file mode 100644
index 0000000..d807662
--- /dev/null
+++ b/man/Zelig-ivreg-class.Rd
@@ -0,0 +1,150 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-ivreg.R
+\docType{class}
+\name{Zelig-ivreg-class}
+\alias{Zelig-ivreg-class}
+\alias{zivreg}
+\title{Instrumental-Variable Regression}
+\source{
+\code{ivreg} is from Christian Kleiber and Achim Zeileis (2008). Applied
+Econometrics with R. New York: Springer-Verlag. ISBN 978-0-387-77316-2. URL
+\url{https://CRAN.R-project.org/package=AER}
+}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not ``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{formula}{specification(s) of the regression relationship}
+
+\item{instruments}{the instruments. Either \code{instruments} is missing and
+formula has three parts as in \code{y ~ x1 + x2 | z1 + z2 + z3} (recommended) or
+formula is \code{y ~ x1 + x2} and instruments is a one-sided formula
+\code{~ z1 + z2 + z3}. Using \code{instruments} is not recommended with \code{zelig}.}
+
+\item{model, x, y}{logicals. If \code{TRUE} the corresponding components of the fit
+(the model frame, the model matrices, the response) are returned.}
+
+\item{...}{further arguments passed to methods. See also \code{\link{zelig}}.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+an object with elements including \code{coefficients}, \code{residuals},
+and \code{formula} which may be summarized using
+\code{summary(z.out)} or individually extracted using, for example,
+\code{coef(z.out)}. See
+\url{http://docs.zeligproject.org/articles/getters.html} for a list of
+functions to extract model components. You can also extract whole fitted
+model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Instrumental-Variable Regression
+}
+\details{
+Additional parameters available to many models include:
+\itemize{
+\item weights: vector of weight values or a name of a variable in the dataset
+by which to weight the model. For more information see:
+\url{http://docs.zeligproject.org/articles/weights.html}.
+\item bootstrap: logical or numeric. If \code{FALSE} don't use bootstraps to
+robustly estimate uncertainty around model parameters due to sampling error.
+If an integer is supplied, the number of bootstraps to run.
+For more information see:
+\url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+
+Regressors and instruments for \code{ivreg} are most easily specified in
+a formula with two parts on the right-hand side, e.g.,
+\code{y ~ x1 + x2 | z1 + z2 + z3}, where \code{x1} and \code{x2} are the regressors and
+\code{z1}, \code{z2}, and \code{z3} are the instruments. Note that exogenous regressors
+have to be included as instruments for themselves. For example, if there is
+one exogenous regressor \code{ex} and one endogenous regressor \code{en} with
+instrument \code{in}, the appropriate formula would be \code{y ~ ex + en | ex + in}.
+Equivalently, this can be specified as \code{y ~ ex + en | . - en + in}, i.e.,
+by providing an update formula with a \code{.} in the second part of the
+formula. The latter is typically more convenient, if there is a large
+number of exogenous regressors.
+}
+\section{Methods}{
+
+\describe{
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
+\examples{
+library(Zelig)
+library(dplyr) # for the pipe operator \%>\%
+# load and transform data
+data("CigarettesSW")
+CigarettesSW$rprice <- with(CigarettesSW, price/cpi)
+CigarettesSW$rincome <- with(CigarettesSW, income/population/cpi)
+CigarettesSW$tdiff <- with(CigarettesSW, (taxs - tax)/cpi)
+# log second stage independent variables, as logging internally for ivreg is
+# not currently supported
+CigarettesSW$log_rprice <- log(CigarettesSW$rprice)
+CigarettesSW$log_rincome <- log(CigarettesSW$rincome)
+z.out1 <- zelig(log(packs) ~ log_rprice + log_rincome |
+log_rincome + tdiff + I(tax/cpi), data = CigarettesSW, subset = year == "1995", model = "ivreg")
+summary(z.out1)
+library(Zelig)
+library(AER) # for sandwich vcov
+library(dplyr) # for the pipe operator \%>\%
+
+# load and transform data
+data("CigarettesSW")
+CigarettesSW$rprice <- with(CigarettesSW, price/cpi)
+CigarettesSW$rincome <- with(CigarettesSW, income/population/cpi)
+CigarettesSW$tdiff <- with(CigarettesSW, (taxs - tax)/cpi)
+
+# log second stage independent variables, as logging internally for ivreg is
+# not currently supported
+CigarettesSW$log_rprice <- log(CigarettesSW$rprice)
+CigarettesSW$log_rincome <- log(CigarettesSW$rincome)
+
+# estimate model
+z.out1 <- zelig(log(packs) ~ log_rprice + log_rincome |
+                    log_rincome + tdiff + I(tax/cpi),
+                    data = CigarettesSW,
+                    model = "ivreg")
+summary(z.out1)
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_ivreg.html}
+Fit instrumental-variable regression by two-stage least squares. This is
+equivalent to direct instrumental-variables estimation when the number of
+instruments is equal to the number of predictors.
+
+\code{\link{zelig}},
+Greene, W. H. (1993) \emph{Econometric Analysis}, 2nd ed., Macmillan.
+}
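
The details above also mention the update-formula shorthand for instruments (a "." in the second part of the formula). Assuming that form is passed through to AER::ivreg unchanged, the CigarettesSW example could be written equivalently as follows, with log_rincome treated as an exogenous regressor and log_rprice as the endogenous one:

    library(Zelig)

    # load and transform data as in the example above
    data("CigarettesSW", package = "AER")
    CigarettesSW$rprice      <- with(CigarettesSW, price / cpi)
    CigarettesSW$rincome     <- with(CigarettesSW, income / population / cpi)
    CigarettesSW$tdiff       <- with(CigarettesSW, (taxs - tax) / cpi)
    CigarettesSW$log_rprice  <- log(CigarettesSW$rprice)
    CigarettesSW$log_rincome <- log(CigarettesSW$rincome)

    # ". - log_rprice + tdiff + I(tax/cpi)" expands to the same instrument set
    # as the explicit "log_rincome + tdiff + I(tax/cpi)" used above
    z.out <- zelig(log(packs) ~ log_rprice + log_rincome |
                       . - log_rprice + tdiff + I(tax / cpi),
                   data = CigarettesSW, model = "ivreg")
    summary(z.out)
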
diff --git a/man/Zelig-logit-bayes-class.Rd b/man/Zelig-logit-bayes-class.Rd
new file mode 100644
index 0000000..ad0a1f9
--- /dev/null
+++ b/man/Zelig-logit-bayes-class.Rd
@@ -0,0 +1,100 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-logit-bayes.R
+\docType{class}
+\name{Zelig-logit-bayes-class}
+\alias{Zelig-logit-bayes-class}
+\alias{zlogitbayes}
+\title{Bayesian Logit Regression}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Bayesian Logit Regression
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+  \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+  \item \code{thin}: thinning interval for the Markov chain. Only every thin-th draw from
+  the Markov chain is kept. The value of mcmc must be divisible by this value. The default
+  value is 1.
+  \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%)
+  is printed to the screen.
+  \item \code{seed}: seed for the random number generator. The default is \code{NA} which
+  corresponds to a random seed of 12345.
+  \item \code{beta.start}: starting values for the Markov chain, either a scalar or vector
+  with length equal to the number of estimated coefficients. The default is \code{NA}, such
+  that the maximum likelihood estimates are used as the starting values.
+}
+Use the following parameters to specify the model's priors:
+\itemize{
+    \item \code{b0}: prior mean for the coefficients, either a numeric vector or a
+    scalar. If a scalar value, that value will be the prior mean for all the
+    coefficients. The default is 0.
+    \item \code{B0}: prior precision parameter for the coefficients, either a
+    square matrix (with the dimensions equal to the number of the coefficients) or
+    a scalar. If a scalar value, that value times an identity matrix will be the
+    prior precision parameter. The default is 0, which leads to an improper prior.
+}
+Use the following arguments to specify optional output for the model:
+\itemize{
+    \item \code{bayes.resid}: defaults to FALSE. If TRUE, the latent
+    Bayesian residuals for all observations are returned. Alternatively,
+    users can specify a vector of observations for which the latent residuals should be returned.
+}
+}
+
+\examples{
+data(turnout)
+z.out <- zelig(vote ~ race + educate, model = "logit.bayes",data = turnout, verbose = FALSE)
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_logitbayes.html}
+}
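
A hedged sketch of the MCMC controls and prior arguments documented above (burnin, mcmc, b0, B0); the values below are illustrative only and should be tuned to the application:

    library(Zelig)
    data(turnout)

    z.out <- zelig(vote ~ race + educate, model = "logit.bayes", data = turnout,
                   burnin = 2000, mcmc = 20000,   # chain length controls
                   b0 = 0, B0 = 0.1,              # Normal prior mean and precision
                   verbose = FALSE)
    summary(z.out)
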
diff --git a/man/Zelig-logit-class.Rd b/man/Zelig-logit-class.Rd
new file mode 100644
index 0000000..41cbef9
--- /dev/null
+++ b/man/Zelig-logit-class.Rd
@@ -0,0 +1,98 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-logit.R
+\docType{class}
+\name{Zelig-logit-class}
+\alias{Zelig-logit-class}
+\alias{zlogit}
+\title{Logistic Regression for Dichotomous Dependent Variables}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{below}{(defaults to 0) The point at which the dependent variable is censored from below. If any values in the dependent variable are observed to be less than the censoring point, it is assumed that that particular observation is censored from below at the observed value. (See for a Bayesian implementation that supports both left and right censoring.)}
+
+\item{robust}{defaults to FALSE. If TRUE, zelig() computes robust standard errors based on sandwich estimators and the options selected in cluster.}
+
+\item{cluster}{if robust = TRUE, you may select a variable to define groups of correlated observations. Let x3 be a variable that consists of either discrete numeric values, character strings, or factors that define strata. Then
+z.out <- zelig(y ~ x1 + x2, robust = TRUE, cluster = "x3", model = "logit", data = mydata)
+means that the observations can be correlated within the strata defined by the variable x3, and that robust standard errors should be calculated according to those clusters. If robust = TRUE but cluster is not specified, zelig() assumes that each observation falls into its own cluster.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Logistic Regression for Dichotomous Dependent Variables
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item weights: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item bootstrap: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+\section{Methods}{
+
+\describe{
+\item{\code{show(signif.stars = FALSE, subset = NULL, bagging = FALSE)}}{Display a Zelig object}
+}}
+
+\examples{
+library(Zelig)
+data(turnout)
+z.out1 <- zelig(vote ~ age + race, model = "logit", data = turnout,
+                cite = FALSE)
+summary(z.out1)
+summary(z.out1, odds_ratios = TRUE)
+x.out1 <- setx(z.out1, age = 36, race = "white")
+s.out1 <- sim(z.out1, x = x.out1)
+summary(s.out1)
+plot(s.out1)
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_logit.html}
+}
diff --git a/man/Zelig-logit-gee-class.Rd b/man/Zelig-logit-gee-class.Rd
new file mode 100644
index 0000000..1f7e1af
--- /dev/null
+++ b/man/Zelig-logit-gee-class.Rd
@@ -0,0 +1,96 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-logit-gee.R
+\docType{class}
+\name{Zelig-logit-gee-class}
+\alias{Zelig-logit-gee-class}
+\alias{zlogitgee}
+\title{Generalized Estimating Equation for Logit Regression}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{id:}{where id is a variable which identifies the clusters. The data should be sorted
+by \code{id} and should be ordered within each cluster when appropriate}
+
+\item{corstr:}{character string specifying the correlation structure:
+"independence", "exchangeable", "ar1", "unstructured" and "userdefined"}
+
+\item{geeglm:}{See geeglm in package geepack for other function arguments}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Generalized Estimating Equation for Logit Regression
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+
+\examples{
+
+data(turnout)
+turnout$cluster <- rep(c(1:200), 10)
+sorted.turnout <- turnout[order(turnout$cluster),]
+
+z.out1 <- zelig(vote ~ race + educate, model = "logit.gee",
+id = "cluster", data = sorted.turnout)
+
+summary(z.out1)
+x.out1 <- setx(z.out1)
+s.out1 <- sim(z.out1, x = x.out1)
+summary(s.out1)
+plot(s.out1)
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_logitgee.html}
+}
diff --git a/man/Zelig-logit-survey-class.Rd b/man/Zelig-logit-survey-class.Rd
new file mode 100644
index 0000000..94b256a
--- /dev/null
+++ b/man/Zelig-logit-survey-class.Rd
@@ -0,0 +1,95 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-logit-survey.R
+\docType{class}
+\name{Zelig-logit-survey-class}
+\alias{Zelig-logit-survey-class}
+\alias{zlogitsurvey}
+\title{Logit Regression with Survey Weights}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{below}{(defaults to 0) The point at which the dependent variable is censored from below. If any values in the dependent variable are observed to be less than the censoring point, it is assumed that that particular observation is censored from below at the observed value. (See for a Bayesian implementation that supports both left and right censoring.)}
+
+\item{robust}{defaults to FALSE. If TRUE, zelig() computes robust standard errors based on sandwich estimators and the options selected in cluster.}
+
+\item{cluster}{if robust = TRUE, you may select a variable to define groups of correlated observations. Let x3 be a variable that consists of either discrete numeric values, character strings, or factors that define strata. Then
+z.out <- zelig(y ~ x1 + x2, robust = TRUE, cluster = "x3", model = "logit.survey", data = mydata)
+means that the observations can be correlated within the strata defined by the variable x3, and that robust standard errors should be calculated according to those clusters. If robust = TRUE but cluster is not specified, zelig() assumes that each observation falls into its own cluster.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Logit Regression with Survey Weights
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item weights: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item bootstrap: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+
+\examples{
+
+data(api, package = "survey")
+apistrat$yr.rnd.numeric <- as.numeric(apistrat$yr.rnd == "Yes")
+z.out1 <- zelig(yr.rnd.numeric ~ meals + mobility, model = "logit.survey",
+               weights = apistrat$pw, data = apistrat)
+
+summary(z.out1)
+x.low <- setx(z.out1, meals = quantile(apistrat$meals, 0.2))
+x.high <- setx(z.out1, meals = quantile(apistrat$meals, 0.8))
+s.out1 <- sim(z.out1, x = x.low, x1 = x.high)
+summary(s.out1)
+plot(s.out1)
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_logitsurvey.html}
+}
diff --git a/man/Zelig-lognorm-class.Rd b/man/Zelig-lognorm-class.Rd
new file mode 100644
index 0000000..89dd6a6
--- /dev/null
+++ b/man/Zelig-lognorm-class.Rd
@@ -0,0 +1,96 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-lognorm.R
+\docType{class}
+\name{Zelig-lognorm-class}
+\alias{Zelig-lognorm-class}
+\alias{zlognorm}
+\title{Log-Normal Regression for Duration Dependent Variables}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{robust}{defaults to FALSE. If TRUE, zelig() computes robust standard errors based
+on sandwich estimators, according to the options specified in cluster.}
+
+\item{cluster}{if robust = TRUE, you may select a variable to define groups of correlated
+observations. Let x3 be a variable that consists of either discrete numeric values, character
+strings, or factors that define strata. Then specifying \code{cluster = "x3"} means that the
+ observations can be correlated within the strata defined by the variable x3, and that robust
+ standard errors should be calculated according to those clusters. If robust = TRUE but
+ cluster is not specified, zelig() assumes that each observation falls into its own cluster.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Log-Normal Regression for Duration Dependent Variables
+}
+\details{
+Additional parameters available to many models include:
+\itemize{
+  \item weights: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item bootstrap: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+\section{Methods}{
+
+\describe{
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
+\examples{
+library(Zelig)
+data(coalition)
+z.out <- zelig(Surv(duration, ciep12) ~ fract + numst2, model = "lognorm", data = coalition)
+summary(z.out)
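+
+# A minimal sketch of the setx()/sim() workflow used by other examples in this
+# package; the 20th/80th percentiles of fract are illustrative choices.
+x.low <- setx(z.out, fract = quantile(coalition$fract, 0.2))
+x.high <- setx(z.out, fract = quantile(coalition$fract, 0.8))
+s.out <- sim(z.out, x = x.low, x1 = x.high)
+summary(s.out)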
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_lognorm.html}
+}
diff --git a/man/Zelig-ls-class.Rd b/man/Zelig-ls-class.Rd
new file mode 100644
index 0000000..722afd0
--- /dev/null
+++ b/man/Zelig-ls-class.Rd
@@ -0,0 +1,86 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-ls.R
+\docType{class}
+\name{Zelig-ls-class}
+\alias{Zelig-ls-class}
+\alias{zls}
+\title{Least Squares Regression for Continuous Dependent Variables}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Least Squares Regression for Continuous Dependent Variables
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+\section{Methods}{
+
+\describe{
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
+\examples{
+library(Zelig)
+data(macro)
+z.out1 <- zelig(unem ~ gdp + capmob + trade, model = "ls", data = macro,
+cite = FALSE)
+summary(z.out1)
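+
+# A minimal sketch of first differences via setx()/sim(), following the
+# pattern used by other examples in this package; the trade quantiles are
+# illustrative choices.
+x.high <- setx(z.out1, trade = quantile(macro$trade, 0.8))
+x.low <- setx(z.out1, trade = quantile(macro$trade, 0.2))
+s.out1 <- sim(z.out1, x = x.high, x1 = x.low)
+summary(s.out1)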
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_ls.html}
+}
diff --git a/man/Zelig-ma-class.Rd b/man/Zelig-ma-class.Rd
new file mode 100644
index 0000000..c187e2b
--- /dev/null
+++ b/man/Zelig-ma-class.Rd
@@ -0,0 +1,87 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-ma.R
+\docType{class}
+\name{Zelig-ma-class}
+\alias{Zelig-ma-class}
+\alias{zma}
+\title{Time-Series Model with Moving Average}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{ts}{The name of the variable containing the time indicator. This should be passed in as
+a string. If this variable is not provided, Zelig will assume that the data is already
+ordered by time.}
+
+\item{cs}{Name of a variable that denotes the cross-sectional element of the data, for example,
+country name in a dataset with time-series across different countries. As a variable name,
+this should be in quotes. If this is not provided, Zelig will assume that all observations
+come from the same unit over time, and should be pooled, but if provided, individual models will
+be run in each cross-section.
+If \code{cs} is given as an argument, \code{ts} must also be provided. Additionally, \code{by}
+must be \code{NULL}.}
+
+\item{order}{A vector of length 3 passed in as \code{c(p,d,q)} where p represents the order of the
+autoregressive model, d represents the number of differences taken in the model, and q represents
+the order of the moving average model.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Warning: \code{summary} does not work with timeseries models after
+simulation.
+}
+\details{
+Currently, time series models support only the Reference class syntax. This model does not
+accept bootstraps or weights.
+}
+
+\examples{
+data(seatshare)
+subset <- seatshare[seatshare$country == "UNITED KINGDOM",]
+ts.out <- zelig(formula = unemp ~ leftseat, model = "ma", ts = "year", data = subset)
+summary(ts.out)
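+
+# A sketch of the Reference class syntax mentioned under Details, assuming the
+# standard $new()/$zelig() methods apply to this model.
+z5 <- zma$new()
+z5$zelig(unemp ~ leftseat, data = subset, ts = "year")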
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_ma.html}
+}
diff --git a/man/Zelig-mlogit-bayes-class.Rd b/man/Zelig-mlogit-bayes-class.Rd
new file mode 100644
index 0000000..aa71ba8
--- /dev/null
+++ b/man/Zelig-mlogit-bayes-class.Rd
@@ -0,0 +1,106 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-mlogit-bayes.R
+\docType{class}
+\name{Zelig-mlogit-bayes-class}
+\alias{Zelig-mlogit-bayes-class}
+\alias{zmlogitbayes}
+\title{Bayesian Multinomial Logistic Regression}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Bayesian Multinomial Logistic Regression
+}
+\details{
+zelig() accepts the following arguments for mlogit.bayes:
+\itemize{
+    \item \code{baseline}: either a character string or numeric value (equal to
+    one of the observed values in the dependent variable) specifying a baseline category.
+    The default value is NA which sets the baseline to the first alphabetical or
+    numerical unique value of the dependent variable.
+}
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+  \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+  \item \code{mcmc.method}: either "MH" or "slice", specifying whether to use the Metropolis
+  algorithm or the slice sampler. The default value is "MH".
+  \item \code{thin}: thinning interval for the Markov chain. Only every thin-th draw from the Markov
+  chain is kept. The value of mcmc must be divisible by this value. The default value is 1.
+  \item \code{tune}: tuning parameter for the Metropolis-Hastings step, either a scalar or a numeric
+  vector (for k coefficients, enter a vector of length k). The tuning parameter should be set such
+  that the acceptance rate is satisfactory (between 0.2 and 0.5). The default value is 1.1.
+  \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%) is
+  printed to the screen.
+  \item \code{seed}: seed for the random number generator. The default is \code{NA} which corresponds
+  to a random seed of 12345.
+  \item \code{beta.start}: starting values for the Markov chain, either a scalar or vector with
+  length equal to the number of estimated coefficients. The default is \code{NA}, such
+  that the maximum likelihood estimates are used as the starting values.
+}
+Use the following parameters to specify the model's priors:
+\itemize{
+    \item \code{b0}: prior mean for the coefficients, either a numeric vector or a scalar.
+    If a scalar value, that value will be the prior mean for all the coefficients.
+    The default is 0.
+    \item \code{B0}: prior precision parameter for the coefficients, either a square
+    matrix (with the dimensions equal to the number of the coefficients) or a scalar.
+    If a scalar value, that value times an identity matrix will be the prior precision
+    parameter. The default is 0, which leads to an improper prior.
+}
+}
+
+\examples{
+data(mexico)
+z.out <- zelig(vote88 ~ pristr + othcok + othsocok, model = "mlogit.bayes",
+               data = mexico, verbose = FALSE)
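+
+# A minimal sketch of follow-up steps; the convergence diagnostic below is
+# assumed to be available here as it is for the other Bayesian models
+# documented in this package (see the normal.bayes example).
+summary(z.out)
+z.out$geweke.diag()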
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_mlogitbayes.html}
+}
diff --git a/man/Zelig-mlogit-class.Rd b/man/Zelig-mlogit-class.Rd
deleted file mode 100644
index 9868aac..0000000
--- a/man/Zelig-mlogit-class.Rd
+++ /dev/null
@@ -1,17 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/model-mlogit.R
-\docType{class}
-\name{Zelig-mlogit-class}
-\alias{Zelig-mlogit-class}
-\alias{zmlogit}
-\title{Multinomial Logistic Regression for Dependent Variables with Unordered Categorical Values}
-\description{
-Vignette: \url{http://docs.zeligproject.org/articles/zeligchoice_mlogit.html}
-}
-\section{Methods}{
-
-\describe{
-\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
-  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
-}}
-
diff --git a/man/Zelig-negbin-class.Rd b/man/Zelig-negbin-class.Rd
new file mode 100644
index 0000000..2bc68b1
--- /dev/null
+++ b/man/Zelig-negbin-class.Rd
@@ -0,0 +1,85 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-negbinom.R
+\docType{class}
+\name{Zelig-negbin-class}
+\alias{Zelig-negbin-class}
+\alias{znegbin}
+\title{Negative Binomial Regression for Event Count Dependent Variables}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Negative Binomial Regression for Event Count Dependent Variables
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+\section{Methods}{
+
+\describe{
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
+\examples{
+library(Zelig)
+data(sanction)
+z.out <- zelig(num ~ target + coop, model = "negbin", data = sanction)
+summary(z.out)
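+
+# A minimal sketch of simulating expected counts with setx()/sim(), following
+# the workflow used elsewhere in this package; spanning the observed range of
+# coop is an illustrative choice.
+x.low <- setx(z.out, coop = min(sanction$coop))
+x.high <- setx(z.out, coop = max(sanction$coop))
+s.out <- sim(z.out, x = x.low, x1 = x.high)
+summary(s.out)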
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_negbin.html}
+}
diff --git a/man/Zelig-normal-bayes-class.Rd b/man/Zelig-normal-bayes-class.Rd
new file mode 100644
index 0000000..e82d2e3
--- /dev/null
+++ b/man/Zelig-normal-bayes-class.Rd
@@ -0,0 +1,105 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-normal-bayes.R
+\docType{class}
+\name{Zelig-normal-bayes-class}
+\alias{Zelig-normal-bayes-class}
+\alias{znormalbayes}
+\title{Bayesian Normal Linear Regression}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Bayesian Normal Linear Regression
+}
+\details{
+Additional parameters available to many models include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+  \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+  \item \code{thin}: thinning interval for the Markov chain. Only every thin-th draw from the Markov chain is kept. The value of mcmc must be divisible by this value. The default value is 1.
+  \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%) is printed to the screen.
+  \item \code{seed}: seed for the random number generator. The default is \code{NA} which corresponds to a random seed of 12345.
+  \item \code{beta.start}: starting values for the Markov chain, either a scalar or vector with length equal to the number of estimated coefficients. The default is \code{NA}, such that the maximum likelihood estimates are used as the starting values.
+}
+Use the following parameters to specify the model's priors:
+\itemize{
+    \item \code{b0}: prior mean for the coefficients, either a numeric vector or a scalar. If a scalar value, that value will be the prior mean for all the coefficients. The default is 0.
+    \item \code{B0}: prior precision parameter for the coefficients, either a square matrix (with the dimensions equal to the number of the coefficients) or a scalar. If a scalar value, that value times an identity matrix will be the prior precision parameter. The default is 0, which leads to an improper prior.
+    \item \code{c0}: c0/2 is the shape parameter for the Inverse Gamma prior on the variance of the disturbance terms.
+    \item \code{d0}: d0/2 is the scale parameter for the Inverse Gamma prior on the variance of the disturbance terms.
+}
+}
+
+\examples{
+data(macro)
+z.out <- zelig(unem ~ gdp + capmob + trade, model = "normal.bayes",
+data = macro, verbose = FALSE)
+
+z.out$geweke.diag()
+z.out$heidel.diag()
+z.out$raftery.diag()
+summary(z.out)
+
+x.out <- setx(z.out)
+s.out1 <- sim(z.out, x = x.out)
+summary(s.out1)
+
+x.high <- setx(z.out, trade = quantile(macro$trade, prob = 0.8))
+x.low <- setx(z.out, trade = quantile(macro$trade, prob = 0.2))
+
+s.out2 <- sim(z.out, x = x.high, x1 = x.low)
+summary(s.out2)
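+
+# A minimal sketch making the prior arguments described under Details explicit;
+# b0 = 0 and B0 = 0 are the documented defaults (an improper flat prior).
+z.prior <- zelig(unem ~ gdp + capmob + trade, model = "normal.bayes",
+                 data = macro, verbose = FALSE, b0 = 0, B0 = 0)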
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_normalbayes.html}
+}
diff --git a/man/Zelig-normal-class.Rd b/man/Zelig-normal-class.Rd
new file mode 100644
index 0000000..d7ecabf
--- /dev/null
+++ b/man/Zelig-normal-class.Rd
@@ -0,0 +1,95 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-normal.R
+\docType{class}
+\name{Zelig-normal-class}
+\alias{Zelig-normal-class}
+\alias{znormal}
+\title{Normal Regression for Continuous Dependent Variables}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{below}{(defaults to 0) The point at which the dependent variable is censored from below. If any values in the dependent variable are observed to be less than the censoring point, it is assumed that that particular observation is censored from below at the observed value. (A Bayesian implementation that supports both left and right censoring is also available.)}
+
+\item{robust}{defaults to FALSE. If TRUE, zelig() computes robust standard errors based on sandwich estimators and the options selected in cluster.}
+
+\item{cluster}{if robust = TRUE, you may select a variable to define groups of correlated observations. Let x3 be a variable that consists of either discrete numeric values, character strings, or factors that define strata. Then
+z.out <- zelig(y ~ x1 + x2, robust = TRUE, cluster = "x3", model = "tobit", data = mydata)
+means that the observations can be correlated within the strata defined by the variable x3, and that robust standard errors should be calculated according to those clusters. If robust = TRUE but cluster is not specified, zelig() assumes that each observation falls into its own cluster.}
+
+\item{formula}{a model fitting formula}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Normal Regression for Continuous Dependent Variables
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+
+\examples{
+data(macro)
+z.out1 <- zelig(unem ~ gdp + capmob + trade, model = "normal",
+data = macro)
+summary(z.out1)
+x.high <- setx(z.out1, trade = quantile(macro$trade, 0.8))
+x.low <- setx(z.out1, trade = quantile(macro$trade, 0.2))
+s.out1 <- sim(z.out1, x = x.high, x1 = x.low)
+summary(s.out1)
+plot(s.out1)
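+
+# A minimal sketch of the getters described under Value: extract individual
+# components or the whole fitted model object.
+coef(z.out1)
+fit <- from_zelig_model(z.out1)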
+
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_normal.html}
+}
diff --git a/man/Zelig-normal-gee-class.Rd b/man/Zelig-normal-gee-class.Rd
new file mode 100644
index 0000000..7270c8a
--- /dev/null
+++ b/man/Zelig-normal-gee-class.Rd
@@ -0,0 +1,101 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-normal-gee.R
+\docType{class}
+\name{Zelig-normal-gee-class}
+\alias{Zelig-normal-gee-class}
+\alias{znormalgee}
+\title{Generalized Estimating Equation for Normal Regression}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{robust}{defaults to TRUE. If TRUE, consistent standard errors are estimated using a "sandwich"
+estimator.}
+
+\item{corstr}{defaults to "independence". It can take the following values:}
+
+\item{Independence}{(corstr = independence): cor(y_it, y_it') = 0 for all t, t' with t not equal to t'.
+It assumes that there is no correlation within the clusters and the model becomes equivalent
+ to standard normal regression. The "working" correlation matrix is the identity matrix.}
+
+\item{Fixed}{(corstr = fixed): If selected, the user must define the "working" correlation
+matrix with the R argument rather than estimating it from the model.}
+
+\item{id:}{where id is a variable which identifies the clusters. The data should be sorted by
+id and should be ordered within each cluster when appropriate}
+
+\item{corstr:}{character string specifying the correlation structure: "independence",
+"exchangeable", "ar1", "unstructured" and "userdefined"}
+
+\item{geeglm:}{See geeglm in package geepack for other function arguments}
+
+\item{Mv:}{defaults to 1. It specifies the number of periods of correlation and
+only needs to be specified when \code{corstr} is stat_M_dep, non_stat_M_dep, or AR-M.}
+
+\item{R:}{defaults to NULL. It specifies a user-defined correlation matrix rather than
+estimating it from the data. The argument is used only when corstr is "fixed". The input is a TxT
+matrix of correlations, where T is the size of the largest cluster.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Generalized Estimating Equation for Normal Regression
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+}
+}
+
+\examples{
+library(Zelig)
+data(macro)
+z.out <- zelig(unem ~ gdp + capmob + trade, model = "normal.gee", id = "country",
+               data = macro, corstr = "ar1")
+summary(z.out)
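+
+# A minimal sketch of simulating quantities of interest from the GEE fit,
+# following the setx()/sim() pattern used elsewhere in this package; the
+# trade quantiles are illustrative choices.
+x.high <- setx(z.out, trade = quantile(macro$trade, 0.8))
+x.low <- setx(z.out, trade = quantile(macro$trade, 0.2))
+s.out <- sim(z.out, x = x.high, x1 = x.low)
+summary(s.out)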
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_normalgee.html}
+}
diff --git a/man/Zelig-normal-survey-class.Rd b/man/Zelig-normal-survey-class.Rd
new file mode 100644
index 0000000..942dfae
--- /dev/null
+++ b/man/Zelig-normal-survey-class.Rd
@@ -0,0 +1,79 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-normal-survey.R
+\docType{class}
+\name{Zelig-normal-survey-class}
+\alias{Zelig-normal-survey-class}
+\alias{znormalsurvey}
+\title{Normal Regression for Continuous Dependent Variables with Survey Weights}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Normal Regression for Continuous Dependent Variables with Survey Weights
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+
+\examples{
+library(Zelig)
+data(api, package = "survey")
+z.out1 <- zelig(api00 ~ meals + yr.rnd, model = "normal.survey", weights = ~pw, data = apistrat)
+summary(z.out1)
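+
+# A minimal sketch of first differences with setx()/sim(), mirroring the
+# logit.survey example in this package; the meals quantiles are illustrative
+# choices.
+x.low <- setx(z.out1, meals = quantile(apistrat$meals, 0.2))
+x.high <- setx(z.out1, meals = quantile(apistrat$meals, 0.8))
+s.out1 <- sim(z.out1, x = x.low, x1 = x.high)
+summary(s.out1)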
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_normalsurvey.html}
+}
diff --git a/man/Zelig-obinchoice-class.Rd b/man/Zelig-obinchoice-class.Rd
deleted file mode 100644
index 8142269..0000000
--- a/man/Zelig-obinchoice-class.Rd
+++ /dev/null
@@ -1,17 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/model-obinchoice.R
-\docType{class}
-\name{Zelig-obinchoice-class}
-\alias{Zelig-obinchoice-class}
-\alias{zobinchoice}
-\title{Ordered Choice object for inheritance across models in ZeligChoice}
-\description{
-Ordered Choice object for inheritance across models in ZeligChoice
-}
-\section{Methods}{
-
-\describe{
-\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
-  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
-}}
-
diff --git a/man/Zelig-ologit-class.Rd b/man/Zelig-ologit-class.Rd
deleted file mode 100644
index a55ba4e..0000000
--- a/man/Zelig-ologit-class.Rd
+++ /dev/null
@@ -1,11 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/model-ologit.R
-\docType{class}
-\name{Zelig-ologit-class}
-\alias{Zelig-ologit-class}
-\alias{zologit}
-\title{Ordinal Logistic Regression for Ordered Categorical Dependent Variables}
-\description{
-Vignette: \url{http://docs.zeligproject.org/articles/zeligchoice_ologit.html}
-}
-
diff --git a/man/Zelig-oprobit-bayes-class.Rd b/man/Zelig-oprobit-bayes-class.Rd
new file mode 100644
index 0000000..7272143
--- /dev/null
+++ b/man/Zelig-oprobit-bayes-class.Rd
@@ -0,0 +1,88 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-oprobit-bayes.R
+\docType{class}
+\name{Zelig-oprobit-bayes-class}
+\alias{Zelig-oprobit-bayes-class}
+\alias{zoprobitbayes}
+\title{Bayesian Ordered Probit Regression}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_oprobitbayes.html}
+}
+\description{
+Bayesian Ordered Probit Regression
+}
+\details{
+Additional parameters available to many models include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+  \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+  \item \code{thin}: thinning interval for the Markov chain. Only every thin-th draw from
+  the Markov chain is kept. The value of mcmc must be divisible by this value. The default
+  value is 1.
+  \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%)
+  is printed to the screen.
+  \item \code{seed}: seed for the random number generator. The default is \code{NA} which
+  corresponds to a random seed of 12345.
+  \item \code{beta.start}: starting values for the Markov chain, either a scalar or vector
+  with length equal to the number of estimated coefficients. The default is \code{NA}, such
+  that the maximum likelihood estimates are used as the starting values.
+}
+Use the following parameters to specify the model's priors:
+\itemize{
+    \item \code{b0}: prior mean for the coefficients, either a numeric vector or a
+    scalar. If a scalar value, that value will be the prior mean for all the
+    coefficients. The default is 0.
+    \item \code{B0}: prior precision parameter for the coefficients, either a
+    square matrix (with the dimensions equal to the number of the coefficients) or
+    a scalar. If a scalar value, that value times an identity matrix will be the
+    prior precision parameter. The default is 0, which leads to an improper prior.
+}
+}
+
diff --git a/man/Zelig-oprobit-class.Rd b/man/Zelig-oprobit-class.Rd
deleted file mode 100644
index 57d0d6d..0000000
--- a/man/Zelig-oprobit-class.Rd
+++ /dev/null
@@ -1,11 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/model-oprobit.R
-\docType{class}
-\name{Zelig-oprobit-class}
-\alias{Zelig-oprobit-class}
-\alias{zoprobit}
-\title{Ordinal Probit Regression for Ordered Categorical Dependent Variables}
-\description{
-Vignette: \url{http://docs.zeligproject.org/articles/zeligchoice_oprobit.html}
-}
-
diff --git a/man/Zelig-poisson-bayes-class.Rd b/man/Zelig-poisson-bayes-class.Rd
new file mode 100644
index 0000000..11525d5
--- /dev/null
+++ b/man/Zelig-poisson-bayes-class.Rd
@@ -0,0 +1,96 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-poisson-bayes.R
+\docType{class}
+\name{Zelig-poisson-bayes-class}
+\alias{Zelig-poisson-bayes-class}
+\alias{zpoissonbayes}
+\title{Bayesian Poisson Regression}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Bayesian Poisson Regression
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+  \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+  \item \code{tune}: Metropolis tuning parameter, either a positive scalar or a vector of length
+  k, where k is the number of coefficients. The tuning parameter should be set such that the
+  acceptance rate of the Metropolis algorithm is satisfactory (typically between 0.20 and 0.5).
+  The default value is 1.1.
+  \item \code{thin}: thinning interval for the Markov chain. Only every thin-th draw from the
+   Markov chain is kept. The value of mcmc must be divisible by this value. The default value is 1.
+  \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%) is
+  printed to the screen.
+  \item \code{seed}: seed for the random number generator. The default is \code{NA} which
+  corresponds to a random seed of 12345.
+  \item \code{beta.start}: starting values for the Markov chain, either a scalar or vector
+  with length equal to the number of estimated coefficients. The default is \code{NA}, such that the maximum likelihood estimates are used as the starting values.
+}
+Use the following parameters to specify the model's priors:
+\itemize{
+    \item \code{b0}: prior mean for the coefficients, either a numeric vector or a scalar.
+    If a scalar value, that value will be the prior mean for all the coefficients.
+    The default is 0.
+    \item \code{B0}: prior precision parameter for the coefficients, either a square matrix
+    (with the dimensions equal to the number of the coefficients) or a scalar.
+    If a scalar value, that value times an identity matrix will be the prior precision parameter.
+    The default is 0, which leads to an improper prior.
+}
+}
+
+\examples{
+data(sanction)
+z.out <- zelig(num ~ target + coop, model = "poisson.bayes", data = sanction, verbose = FALSE)
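+
+# A minimal sketch making the burnin and mcmc arguments described under
+# Details explicit; the values used are the documented defaults.
+z.out2 <- zelig(num ~ target + coop, model = "poisson.bayes", data = sanction,
+                verbose = FALSE, burnin = 1000, mcmc = 10000)
+summary(z.out2)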
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_poissonbayes.html}
+}
diff --git a/man/Zelig-poisson-class.Rd b/man/Zelig-poisson-class.Rd
new file mode 100644
index 0000000..f9b8259
--- /dev/null
+++ b/man/Zelig-poisson-class.Rd
@@ -0,0 +1,85 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-poisson.R
+\docType{class}
+\name{Zelig-poisson-class}
+\alias{Zelig-poisson-class}
+\alias{zpoisson}
+\title{Poisson Regression for Event Count Dependent Variables}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{id:}{where id is a variable which identifies the clusters. The data should be sorted by id and should be ordered within each cluster when appropriate}
+
+\item{corstr:}{character string specifying the correlation structure: "independence", "exchangeable", "ar1", "unstructured" and "userdefined"}
+
+\item{geeglm:}{See geeglm in package geepack for other function arguments}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Poisson Regression for Event Count Dependent Variables
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+
+\examples{
+library(Zelig)
+data(sanction)
+z.out <- zelig(num ~ target + coop, model = "poisson", data = sanction)
+summary(z.out)
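+
+# A minimal sketch of simulating expected counts with setx()/sim(), following
+# the workflow used elsewhere in this package; spanning the observed range of
+# coop is an illustrative choice.
+x.low <- setx(z.out, coop = min(sanction$coop))
+x.high <- setx(z.out, coop = max(sanction$coop))
+s.out <- sim(z.out, x = x.low, x1 = x.high)
+summary(s.out)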
+
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_poisson.html}
+}
diff --git a/man/Zelig-poisson-gee-class.Rd b/man/Zelig-poisson-gee-class.Rd
new file mode 100644
index 0000000..dc0b433
--- /dev/null
+++ b/man/Zelig-poisson-gee-class.Rd
@@ -0,0 +1,89 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-poisson-gee.R
+\docType{class}
+\name{Zelig-poisson-gee-class}
+\alias{Zelig-poisson-gee-class}
+\alias{zpoissongee}
+\title{Generalized Estimating Equation for Poisson Regression}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{id:}{where id is a variable which identifies the clusters. The data should
+be sorted by id and should be ordered within each cluster when appropriate}
+
+\item{corstr:}{character string specifying the correlation structure: "independence",
+"exchangeable", "ar1", "unstructured" and "userdefined"}
+
+\item{geeglm:}{See geeglm in package geepack for other function arguments}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Generalized Estimating Equation for Poisson Regression
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+
+\examples{
+library(Zelig)
+data(sanction)
+sanction$cluster <- c(rep(c(1:15), 5), rep(c(16), 3))
+sorted.sanction <- sanction[order(sanction$cluster),]
+z.out <- zelig(num ~ target + coop, model = "poisson.gee", id = "cluster", data = sorted.sanction)
+summary(z.out)
+
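+# A hedged follow-on sketch (not part of the original example): simulate
+# quantities of interest from the GEE fit at the covariate defaults.
+x.out <- setx(z.out)
+s.out <- sim(z.out, x = x.out)
+summary(s.out)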
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_poissongee.html}
+}
diff --git a/man/Zelig-poisson-survey-class.Rd b/man/Zelig-poisson-survey-class.Rd
new file mode 100644
index 0000000..5a57285
--- /dev/null
+++ b/man/Zelig-poisson-survey-class.Rd
@@ -0,0 +1,79 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-poisson-survey.R
+\docType{class}
+\name{Zelig-poisson-survey-class}
+\alias{Zelig-poisson-survey-class}
+\alias{zpoissonsurvey}
+\title{Poisson Regression with Survey Weights}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Poisson Regression with Survey Weights
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+
+\examples{
+library(Zelig)
+data(api, package="survey")
+z.out1 <- zelig(enroll ~ api99 + yr.rnd, model = "poisson.survey", data = apistrat)
+summary(z.out1)
+
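+# A hedged follow-on sketch (not part of the original example): compare
+# expected counts at the 20th and 80th percentiles of api99, mirroring
+# the survey example shown elsewhere in this package.
+x.low <- setx(z.out1, api99 = quantile(apistrat$api99, 0.2))
+x.high <- setx(z.out1, api99 = quantile(apistrat$api99, 0.8))
+s.out1 <- sim(z.out1, x = x.low, x1 = x.high)
+summary(s.out1)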
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_poissonsurvey.html}
+}
diff --git a/man/Zelig-probit-bayes-class.Rd b/man/Zelig-probit-bayes-class.Rd
new file mode 100644
index 0000000..a635c62
--- /dev/null
+++ b/man/Zelig-probit-bayes-class.Rd
@@ -0,0 +1,101 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-probit-bayes.R
+\docType{class}
+\name{Zelig-probit-bayes-class}
+\alias{Zelig-probit-bayes-class}
+\alias{zprobitbayes}
+\title{Bayesian Probit Regression}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. For example, to run the same model on all fifty states, you could
+use: \code{z.out <- zelig(y ~ x1 + x2, data = mydata, model = 'ls',
+by = 'state')} You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Bayesian Probit Regression
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+  \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+  \item \code{thin}: thinning interval for the Markov chain. Only every thin-th draw from the
+  Markov chain is kept. The value of mcmc must be divisible by this value. The default value is 1.
+  \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%) is
+  printed to the screen.
+  \item \code{seed}: seed for the random number generator. The default is \code{NA} which
+  corresponds to a random seed of 12345.
+  \item \code{beta.start}: starting values for the Markov chain, either a scalar or vector with
+  length equal to the number of estimated coefficients. The default is \code{NA}, such that the
+  maximum likelihood estimates are used as the starting values.
+}
+Use the following parameters to specify the model's priors:
+\itemize{
+    \item \code{b0}: prior mean for the coefficients, either a numeric vector or a scalar.
+    If a scalar value, that value will be the prior mean for all the coefficients. The default is 0.
+    \item \code{B0}: prior precision parameter for the coefficients, either a square matrix (with
+    the dimensions equal to the number of the coefficients) or a scalar. If a scalar value, that
+    value times an identity matrix will be the prior precision parameter. The default is 0, which
+    leads to an improper prior.
+}
+Use the following arguments to specify optional output for the model:
+\itemize{
+    \item \code{bayes.resid}: defaults to FALSE. If TRUE, the latent Bayesian residuals for all
+    observations are returned. Alternatively, users can specify a vector of observations for
+    which the latent residuals should be returned.
+}
+}
+
+\examples{
+data(turnout)
+z.out <- zelig(vote ~ race + educate, model = "probit.bayes", data = turnout, verbose = FALSE)
+summary(z.out)
+
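+# A hedged sketch (not part of the original example): the same model with
+# the MCMC control and prior arguments documented above set explicitly;
+# the values shown are simply the documented defaults.
+z.out2 <- zelig(vote ~ race + educate, model = "probit.bayes",
+                data = turnout, burnin = 1000, mcmc = 10000, thin = 1,
+                b0 = 0, B0 = 0, verbose = FALSE)
+summary(z.out2)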
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_probitbayes.html}
+}
diff --git a/man/Zelig-probit-class.Rd b/man/Zelig-probit-class.Rd
new file mode 100644
index 0000000..3981a0f
--- /dev/null
+++ b/man/Zelig-probit-class.Rd
@@ -0,0 +1,72 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-probit.R
+\docType{class}
+\name{Zelig-probit-class}
+\alias{Zelig-probit-class}
+\alias{zprobit}
+\title{Probit Regression for Dichotomous Dependent Variables}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\description{
+Probit Regression for Dichotomous Dependent Variables
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+
+\examples{
+data(turnout)
+z.out <- zelig(vote ~ race + educate, model = "probit", data = turnout)
+summary(z.out)
+x.out <- setx(z.out)
+s.out <- sim(z.out, x = x.out)
+summary(s.out)
+plot(s.out)
+
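+# A hedged sketch (not part of the original example): refit with the
+# bootstrap option described above; 20 bootstraps is illustrative only.
+z.boot <- zelig(vote ~ race + educate, model = "probit",
+                data = turnout, bootstrap = 20)
+summary(z.boot)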
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_probit.html}
+}
diff --git a/man/Zelig-probit-gee-class.Rd b/man/Zelig-probit-gee-class.Rd
new file mode 100644
index 0000000..7d566f8
--- /dev/null
+++ b/man/Zelig-probit-gee-class.Rd
@@ -0,0 +1,94 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-probit-gee.R
+\docType{class}
+\name{Zelig-probit-gee-class}
+\alias{Zelig-probit-gee-class}
+\alias{zprobitgee}
+\title{Generalized Estimating Equation for Probit Regression}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{id:}{where id is a variable which identifies the clusters. The data should be
+sorted by id and should be ordered within each cluster when appropriate}
+
+\item{corstr:}{character string specifying the correlation structure: "independence",
+"exchangeable", "ar1", "unstructured" and "userdefined"}
+
+\item{geeglm:}{See geeglm in package geepack for other function arguments}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Generalized Estimating Equation for Probit Regression
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+
+\examples{
+data(turnout)
+turnout$cluster <- rep(c(1:200), 10)
+sorted.turnout <- turnout[order(turnout$cluster),]
+z.out1 <- zelig(vote ~ race + educate, model = "probit.gee",
+id = "cluster", data = sorted.turnout)
+summary(z.out1)
+
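+# A hedged sketch (not part of the original example): the same model with
+# an exchangeable working correlation structure, as documented for corstr.
+z.out2 <- zelig(vote ~ race + educate, model = "probit.gee",
+                id = "cluster", corstr = "exchangeable",
+                data = sorted.turnout)
+summary(z.out2)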
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_probitgee.html}
+}
diff --git a/man/Zelig-probit-survey-class.Rd b/man/Zelig-probit-survey-class.Rd
new file mode 100644
index 0000000..7fc3a34
--- /dev/null
+++ b/man/Zelig-probit-survey-class.Rd
@@ -0,0 +1,111 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-probit-survey.R
+\docType{class}
+\name{Zelig-probit-survey-class}
+\alias{Zelig-probit-survey-class}
+\alias{zprobitsurvey}
+\title{Probit Regression with Survey Weights}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{below:}{point at which the dependent variable is censored from below.
+If the dependent variable is only censored from above, set \code{below = -Inf}.
+The default value is 0.}
+
+\item{above:}{point at which the dependent variable is censored from above.
+If the dependent variable is only censored from below, set \code{above = Inf}.
+The default value is \code{Inf}.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Probit Regression with Survey Weights
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item weights: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item burnin: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+  \item mcmc: number of the MCMC iterations after burnin (defaults to 10,000).
+  \item thin: thinning interval for the Markov chain. Only every thin-th
+  draw from the Markov chain is kept. The value of mcmc must be divisible by this value.
+  The default value is 1.
+  \item verbose: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%)
+  is printed to the screen.
+  \item seed: seed for the random number generator. The default is \code{NA} which
+  corresponds to a random seed of 12345.
+  \item beta.start: starting values for the Markov chain, either a scalar or
+  vector with length equal to the number of estimated coefficients. The default is
+  \code{NA}, such that the maximum likelihood estimates are used as the starting values.
+}
+Use the following parameters to specify the model's priors:
+\itemize{
+    \item b0: prior mean for the coefficients, either a numeric vector or a scalar.
+    If a scalar value, that value will be the prior mean for all the coefficients.
+    The default is 0.
+    \item B0: prior precision parameter for the coefficients, either a square matrix
+    (with the dimensions equal to the number of the coefficients) or a scalar.
+    If a scalar value, that value times an identity matrix will be the prior precision parameter.
+    The default is 0, which leads to an improper prior.
+    \item c0: c0/2 is the shape parameter for the Inverse Gamma prior on the variance of the
+    disturbance terms.
+    \item d0: d0/2 is the scale parameter for the Inverse Gamma prior on the variance of the
+    disturbance terms.
+}
+}
+
+\examples{
+data(api, package="survey")
+z.out1 <- zelig(enroll ~ api99 + yr.rnd,
+model = "poisson.survey", data = apistrat)
+summary(z.out1)
+x.low <- setx(z.out1, api99 = quantile(apistrat$api99, 0.2))
+x.high <- setx(z.out1, api99 = quantile(apistrat$api99, 0.8))
+s.out1 <- sim(z.out1, x=x.low, x1=x.high)
+summary(s.out1)
+plot(s.out1)
+
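+# A hedged sketch (not part of the original example): a probit.survey fit
+# on a binary outcome; assumes sch.wide ("Yes"/"No") is available in
+# apistrat, as in the survey package's api data.
+z.out2 <- zelig(I(sch.wide == "Yes") ~ api99 + yr.rnd,
+                model = "probit.survey", data = apistrat)
+summary(z.out2)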
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_probitsurvey.html}
+}
diff --git a/man/Zelig-quantile-class.Rd b/man/Zelig-quantile-class.Rd
new file mode 100644
index 0000000..50f12a9
--- /dev/null
+++ b/man/Zelig-quantile-class.Rd
@@ -0,0 +1,109 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-quantile.R
+\docType{class}
+\name{Zelig-quantile-class}
+\alias{Zelig-quantile-class}
+\alias{zquantile}
+\title{Quantile Regression for Continuous Dependent Variables}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Quantile Regression for Continuous Dependent Variables
+}
+\details{
+In addition to the standard inputs, \code{zelig} takes the following additional options
+for quantile regression:
+\itemize{
+    \item \code{tau}: defaults to 0.5. Specifies the conditional quantile(s) that will be
+    estimated. 0.5 corresponds to estimating the conditional median, 0.25 and 0.75 correspond
+    to the conditional quartiles, etc. tau vectors with length greater than 1 are not currently
+    supported. If tau is set outside of the interval [0,1], zelig returns the solution for all
+    possible conditional quantiles given the data, but does not support inference on this fit
+    (setx and sim will fail).
+    \item \code{se}: a string value that defaults to "nid". Specifies the method by which
+    the covariance matrix of coefficients is estimated during the sim stage of analysis. \code{se}
+    can take the following values, which are passed to the \code{summary.rq} function from the
+    \code{quantreg} package. These descriptions are copied from the \code{summary.rq} documentation.
+    \itemize{
+        \item \code{"iid"} which presumes that the errors are iid and computes an estimate of
+        the asymptotic covariance matrix as in KB(1978).
+        \item \code{"nid"} which presumes local (in tau) linearity (in x) of the the
+        conditional quantile functions and computes a Huber sandwich estimate using a local
+        estimate of the sparsity.
+        \item \code{"ker"} which uses a kernel estimate of the sandwich as proposed by Powell(1990).
+    }
+    \item \code{...}: additional options passed to rq when fitting the model. See documentation for rq in the quantreg package for more information.
+}
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+\section{Methods}{
+
+\describe{
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
+\examples{
+library(Zelig)
+data(stackloss)
+z.out1 <- zelig(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,
+model = "rq", data = stackloss,tau = 0.5)
+summary(z.out1)
+
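+# A hedged sketch (not part of the original example): estimate the first
+# quartile instead of the median by changing tau, as described above.
+z.out2 <- zelig(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,
+                model = "rq", data = stackloss, tau = 0.25)
+summary(z.out2)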
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_quantile.html}
+}
diff --git a/man/Zelig-relogit-class.Rd b/man/Zelig-relogit-class.Rd
new file mode 100644
index 0000000..c524443
--- /dev/null
+++ b/man/Zelig-relogit-class.Rd
@@ -0,0 +1,103 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-relogit.R
+\docType{class}
+\name{Zelig-relogit-class}
+\alias{Zelig-relogit-class}
+\alias{zrelogit}
+\title{Rare Events Logistic Regression for Dichotomous Dependent Variables}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Rare Events Logistic Regression for Dichotomous Dependent Variables
+}
+\details{
+The relogit procedure supports several optional arguments in addition to the
+standard arguments for zelig(). You may additionally use:
+\itemize{
+    \item \code{tau}: a vector containing either one or two values for \code{tau},
+    the true population fraction of ones. Use, for example, tau = c(0.05, 0.1) to specify
+    that the lower bound on tau is 0.05 and the upper bound is 0.1. If left unspecified, only
+    finite-sample bias correction is performed, not case-control correction.
+    \item \code{case.control}: if tau is specified, choose a method to correct for case-control
+    sampling design: "prior" (default) or "weighting".
+    \item \code{bias.correct}: a logical value of \code{TRUE} (default) or \code{FALSE}
+    indicating whether the intercept should be corrected for finite sample (rare events) bias.
+}
+
+Additional parameters available to many models include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+\section{Methods}{
+
+\describe{
+\item{\code{modcall_formula_transformer()}}{Transform model call formula.}
+
+\item{\code{show(signif.stars = FALSE, subset = NULL, bagging = FALSE)}}{Display a Zelig object}
+
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
+\examples{
+library(Zelig)
+data(mid)
+z.out1 <- zelig(conflict ~ major + contig + power + maxdem + mindem + years,
+              data = mid, model = "relogit", tau = 1042/303772)
+summary(z.out1)
+
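+# A hedged sketch (not part of the original example): supply lower and
+# upper bounds on tau and use the weighting correction documented above;
+# the bounds shown are illustrative only.
+z.out2 <- zelig(conflict ~ major + contig + power + maxdem + mindem + years,
+                data = mid, model = "relogit",
+                tau = c(0.002, 0.005), case.control = "weighting")
+summary(z.out2)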
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_relogit.html}
+}
diff --git a/man/Zelig-survey-class.Rd b/man/Zelig-survey-class.Rd
new file mode 100644
index 0000000..27c6132
--- /dev/null
+++ b/man/Zelig-survey-class.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-survey.R
+\docType{class}
+\name{Zelig-survey-class}
+\alias{Zelig-survey-class}
+\alias{zsurvey}
+\title{Survey models in Zelig for weights for complex sampling designs}
+\description{
+Survey models in Zelig for weights for complex sampling designs
+}
+\section{Methods}{
+
+\describe{
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
diff --git a/man/Zelig-timeseries-class.Rd b/man/Zelig-timeseries-class.Rd
new file mode 100644
index 0000000..2a85cad
--- /dev/null
+++ b/man/Zelig-timeseries-class.Rd
@@ -0,0 +1,21 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-timeseries.R
+\docType{class}
+\name{Zelig-timeseries-class}
+\alias{Zelig-timeseries-class}
+\alias{ztimeseries}
+\title{Time-series models in Zelig}
+\description{
+Time-series models in Zelig
+}
+\section{Methods}{
+
+\describe{
+\item{\code{packagename()}}{Automatically retrieve wrapped package name}
+
+\item{\code{sim(num = NULL)}}{Generic Method for Computing and Organizing Simulated Quantities of Interest}
+
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
diff --git a/man/Zelig-tobit-bayes-class.Rd b/man/Zelig-tobit-bayes-class.Rd
new file mode 100644
index 0000000..51923d7
--- /dev/null
+++ b/man/Zelig-tobit-bayes-class.Rd
@@ -0,0 +1,110 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-tobit-bayes.R
+\docType{class}
+\name{Zelig-tobit-bayes-class}
+\alias{Zelig-tobit-bayes-class}
+\alias{ztobitbayes}
+\title{Bayesian Tobit Regression}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{below:}{point at which the dependent variable is censored from below.
+If the dependent variable is only censored from above, set \code{below = -Inf}.
+The default value is 0.}
+
+\item{above:}{point at which the dependent variable is censored from above.
+If the dependent variable is only censored from below, set \code{above = Inf}.
+The default value is \code{Inf}.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Bayesian Tobit Regression
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{burnin}: number of the initial MCMC iterations to be discarded (defaults to 1,000).
+  \item \code{mcmc}: number of the MCMC iterations after burnin (defaults to 10,000).
+  \item \code{thin}: thinning interval for the Markov chain. Only every thin-th
+  draw from the Markov chain is kept. The value of mcmc must be divisible by this value.
+  The default value is 1.
+  \item \code{verbose}: defaults to FALSE. If TRUE, the progress of the sampler (every 10\%)
+  is printed to the screen.
+  \item \code{seed}: seed for the random number generator. The default is \code{NA} which
+  corresponds to a random seed of 12345.
+  \item \code{beta.start}: starting values for the Markov chain, either a scalar or
+  vector with length equal to the number of estimated coefficients. The default is
+  \code{NA}, such that the maximum likelihood estimates are used as the starting values.
+}
+Use the following parameters to specify the model's priors:
+\itemize{
+    \item \code{b0}: prior mean for the coefficients, either a numeric vector or a scalar.
+    If a scalar value, that value will be the prior mean for all the coefficients.
+    The default is 0.
+    \item \code{B0}: prior precision parameter for the coefficients, either a square matrix
+    (with the dimensions equal to the number of the coefficients) or a scalar.
+    If a scalar value, that value times an identity matrix will be the prior precision parameter.
+    The default is 0, which leads to an improper prior.
+    \item \code{c0}: \code{c0}/2 is the shape parameter for the Inverse Gamma prior on the variance of the
+    disturbance terms.
+    \item \code{d0}: \code{d0}/2 is the scale parameter for the Inverse Gamma prior on the variance of the
+    disturbance terms.
+}
+}
+
+\examples{
+data(turnout)
+z.out <- zelig(vote ~ race + educate, model = "tobit.bayes", data = turnout, verbose = FALSE)
+
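+# A hedged follow-on sketch (not part of the original example): summarize
+# the posterior, then refit with the prior arguments documented above set
+# explicitly; the values for B0, c0, and d0 are illustrative only.
+summary(z.out)
+z.out2 <- zelig(vote ~ race + educate, model = "tobit.bayes",
+                data = turnout, b0 = 0, B0 = 0.1, c0 = 2, d0 = 2,
+                verbose = FALSE)
+summary(z.out2)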
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_tobitbayes.html}
+}
diff --git a/man/Zelig-tobit-class.Rd b/man/Zelig-tobit-class.Rd
new file mode 100644
index 0000000..7bd1c5e
--- /dev/null
+++ b/man/Zelig-tobit-class.Rd
@@ -0,0 +1,104 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-tobit.R
+\docType{class}
+\name{Zelig-tobit-class}
+\alias{Zelig-tobit-class}
+\alias{ztobit}
+\title{Linear Regression for a Left-Censored Dependent Variable}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+
+\item{below}{(defaults to 0) The point at which the dependent variable is censored from below.
+If any values in the dependent variable are observed to be less than the censoring point,
+it is assumed that that particular observation is censored from below at the observed value.}
+
+\item{above}{(defaults to \code{Inf}) The point at which the dependent variable is censored from above.
+If any values in the dependent variable are observed to be more than the censoring point,
+it is assumed that that particular observation is censored from above at the observed value.}
+
+\item{robust}{defaults to FALSE. If TRUE, \code{zelig()} computes robust standard errors based on
+sandwich estimators and the options selected in cluster.}
+
+\item{cluster}{if robust = TRUE, you may select a variable to define groups of correlated
+observations. Let x3 be a variable that consists of either discrete numeric values, character
+strings, or factors that define strata. Then z.out <- zelig(y ~ x1 + x2, robust = TRUE,
+cluster = "x3", model = "tobit", data = mydata)means that the observations can be correlated
+within the strata defined by the variable x3, and that robust standard errors should be
+calculated according to those clusters. If robust = TRUE but cluster is not specified,
+zelig() assumes that each observation falls into its own cluster.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Linear Regression for a Left-Censored Dependent Variable
+}
+\details{
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+\section{Methods}{
+
+\describe{
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
+\examples{
+library(Zelig)
+data(tobin)
+z.out <- zelig(durable ~ age + quant, model = "tobit", data = tobin)
+summary(z.out)
+
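+# A hedged follow-on sketch (not part of the original example): simulate
+# expected values at low and high ages using the setx()/sim() workflow.
+x.low <- setx(z.out, age = quantile(tobin$age, 0.2))
+x.high <- setx(z.out, age = quantile(tobin$age, 0.8))
+s.out <- sim(z.out, x = x.low, x1 = x.high)
+summary(s.out)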
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_tobit.html}
+}
diff --git a/man/Zelig-weibull-class.Rd b/man/Zelig-weibull-class.Rd
new file mode 100644
index 0000000..4c07d3e
--- /dev/null
+++ b/man/Zelig-weibull-class.Rd
@@ -0,0 +1,100 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-weibull.R
+\docType{class}
+\name{Zelig-weibull-class}
+\alias{Zelig-weibull-class}
+\alias{zweibull}
+\title{Weibull Regression for Duration Dependent Variables}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+Weibull Regression for Duration Dependent Variables
+}
+\details{
+In addition to the standard inputs, zelig() takes the following
+additional options for weibull regression:
+\itemize{
+    \item \code{robust}: defaults to FALSE. If TRUE, zelig() computes
+    robust standard errors based on sandwich estimators and the options selected in cluster.
+    \item \code{cluster}: if \code{robust = TRUE}, you may select a variable
+    to define groups of correlated observations. Let x3 be a variable
+    that consists of either discrete numeric values, character strings,
+    or factors that define strata. Then
+    \code{z.out <- zelig(y ~ x1 + x2, robust = TRUE, cluster = "x3",
+    model = "weibull", data = mydata)}
+    means that the observations can be correlated within the strata defined
+    by the variable x3, and that robust standard errors should be calculated according to
+    those clusters. If \code{robust = TRUE} but cluster is not specified, zelig() assumes
+    that each observation falls into its own cluster.
+}
+
+Additional parameters available to this model include:
+\itemize{
+  \item \code{weights}: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+\section{Methods}{
+
+\describe{
+\item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by,
+  bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models}
+}}
+
+\examples{
+data(coalition)
+z.out <- zelig(Surv(duration, ciep12) ~ fract + numst2, model = "weibull", data = coalition)
+
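+# A hedged follow-on sketch (not part of the original example): summarize
+# the fit and simulate expected durations at the covariate defaults.
+summary(z.out)
+x.out <- setx(z.out)
+s.out <- sim(z.out, x = x.out)
+summary(s.out)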
+}
+\seealso{
+Vignette: \url{http://docs.zeligproject.org/articles/zelig_weibull.html}
+}
diff --git a/man/Zelig.url.Rd b/man/Zelig.url.Rd
new file mode 100644
index 0000000..1f449c9
--- /dev/null
+++ b/man/Zelig.url.Rd
@@ -0,0 +1,15 @@
+\name{Zelig.url}
+
+\alias{Zelig.url}
+
+\title{Table of links for Zelig}
+
+\description{
+  Table of links for \code{help.zelig} for the core Zelig package.  
+}
+
+\keyword{datasets}
+
+
+
+
diff --git a/man/approval.Rd b/man/approval.Rd
new file mode 100644
index 0000000..e5eeb74
--- /dev/null
+++ b/man/approval.Rd
@@ -0,0 +1,27 @@
+\name{approval}
+\alias{approval}
+
+\title{U.S. Presidential Approval Data}
+
+\description{
+  Monthly public opinion data for 2001-2006.  
+}
+
+\usage{data(approval)}
+
+\format{
+  A table containing 8 variables ("month", "year", "approve", 
+  "disapprove", "unsure", "sept.oct.2001", "iraq.war", and "avg.price")
+  and 65 observations.  }
+
+\source{ICPSR}
+
+\references{
+  Stuff here
+}
+
+\keyword{datasets}
+
+
+
+
diff --git a/man/avg.Rd b/man/avg.Rd
new file mode 100644
index 0000000..66ff20d
--- /dev/null
+++ b/man/avg.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{avg}
+\alias{avg}
+\title{Compute central tendency as appropriate to data type}
+\usage{
+avg(val)
+}
+\arguments{
+\item{val}{a vector of values}
+}
+\value{
+a mean (if numeric) or a median (if ordered) or mode (otherwise)
+}
+\description{
+Compute central tendency as appropriate to data type
+}
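+\examples{
+# A hedged sketch (not from the original man page); assumes avg() is
+# exported by Zelig. For a numeric vector the mean is returned.
+avg(c(1, 2, 3, 4))
+}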
diff --git a/man/bivariate.Rd b/man/bivariate.Rd
new file mode 100644
index 0000000..db1ef7a
--- /dev/null
+++ b/man/bivariate.Rd
@@ -0,0 +1,24 @@
+\name{bivariate}
+
+\alias{bivariate}
+
+\title{Sample data for bivariate probit regression}
+
+\description{
+  Sample data for the bivariate probit regression.  
+}
+
+\usage{data(bivariate)}
+
+\format{A table containing 6 variables ("y1", "y2", "x1", 
+"x2", "x3", and "x4") and 78 observations.}
+
+\source{This is a cleaned and relabelled version of the sanction data
+  set, available in Zelig.}
+
+\references{
+  Martin, Lisa (1992).  \emph{Coercive Cooperation: Explaining Multilateral
+    Economic Sanctions}, Princeton: Princeton University Press.
+}
+
+\keyword{datasets}
diff --git a/man/ci.plot.Rd b/man/ci.plot.Rd
new file mode 100644
index 0000000..a72bc99
--- /dev/null
+++ b/man/ci.plot.Rd
@@ -0,0 +1,62 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/plots.R
+\name{ci.plot}
+\alias{ci.plot}
+\title{Method for plotting qi simulations across a range within a variable, with confidence intervals}
+\usage{
+ci.plot(obj, qi="ev", var=NULL, ..., main = NULL, sub =
+ NULL, xlab = NULL, ylab = NULL, xlim = NULL, ylim =
+ NULL, legcol="gray20", col=NULL, leg=1, legpos=
+ NULL, ci = c(80, 95, 99.9), discont=NULL)
+}
+\arguments{
+\item{obj}{A reference class zelig5 object}
+
+\item{qi}{a character-string specifying the quantity of interest to plot}
+
+\item{var}{The variable to be used on the x-axis. Default is the variable
+across all the chosen values with smallest nonzero variance}
+
+\item{...}{Parameters to be passed to the `truehist' function which is
+implicitly called for numeric simulations}
+
+\item{main}{a character-string specifying the main heading of the plot}
+
+\item{sub}{a character-string specifying the sub heading of the plot}
+
+\item{xlab}{a character-string specifying the label for the x-axis}
+
+\item{ylab}{a character-string specifying the label for the y-axis}
+
+\item{xlim}{Limits to the x-axis}
+
+\item{ylim}{Limits to the y-axis}
+
+\item{legcol}{``legend color'', a valid color used for plotting the line
+colors in the legend}
+
+\item{col}{a valid vector of colors of at least length 3 to use to color the
+confidence intervals}
+
+\item{leg}{``legend position'', an integer from 1 to 4, specifying the
+position of the legend. 1 to 4 correspond to ``SE'', ``SW'', ``NW'', and
+``NE'' respectively.  Setting to 0 or ``n'' turns off the legend.}
+
+\item{legpos}{``legend type'', exact coordinates and sizes for legend.
+Overrides argument ``leg.type''}
+
+\item{ci}{vector of length three of confidence interval levels to draw.}
+
+\item{discont}{optional point of discontinuity along the x-axis at which
+to interrupt the graph}
+}
+\value{
+the current graphical parameters. This is subject to change in future
+implementations of Zelig
+}
+\description{
+Method for plotting qi simulations across a range within a variable, with confidence intervals
+}
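+\examples{
+\dontrun{
+# A hedged sketch (not from the original man page): plot expected values
+# with confidence intervals across a range of educate; assumes the object
+# returned after sim() over a setx() range is what ci.plot() expects.
+library(Zelig)
+data(turnout)
+z.out <- zelig(vote ~ race + educate, model = "logit", data = turnout)
+x.out <- setx(z.out, educate = 6:16)
+s.out <- sim(z.out, x = x.out)
+ci.plot(s.out, qi = "ev")
+}
+}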
+\author{
+James Honaker
+}
diff --git a/man/ci_check.Rd b/man/ci_check.Rd
new file mode 100644
index 0000000..7d31be0
--- /dev/null
+++ b/man/ci_check.Rd
@@ -0,0 +1,16 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/interface.R
+\name{ci_check}
+\alias{ci_check}
+\title{Convert \code{ci} interval from percent to proportion and check if valid}
+\usage{
+ci_check(x)
+}
+\arguments{
+\item{x}{numeric. The central interval to return, expressed on the \code{(0, 100]}
+or the equivalent \code{(0, 1]} interval.}
+}
+\description{
+Convert \code{ci} interval from percent to proportion and check if valid
+}
+\keyword{internal}
diff --git a/man/cluster.formula.Rd b/man/cluster.formula.Rd
new file mode 100644
index 0000000..55f9af8
--- /dev/null
+++ b/man/cluster.formula.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{cluster.formula}
+\alias{cluster.formula}
+\title{Generate Formulae that Consider Clustering}
+\usage{
+cluster.formula(formula, cluster)
+}
+\arguments{
+\item{formula}{a formula object}
+
+\item{cluster}{a vector}
+}
+\value{
+a formula object describing clustering
+}
+\description{
+This method is used internally by the "Zelig" Package to interpret
+clustering in GEE models.
+}
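+\examples{
+# A hedged sketch (not from the original man page); assumes
+# cluster.formula() is exported. Builds a formula that carries the
+# clustering variable "x3" alongside the model terms.
+cluster.formula(y ~ x1 + x2, cluster = "x3")
+}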
diff --git a/man/coalition2.Rd b/man/coalition2.Rd
new file mode 100644
index 0000000..adabea1
--- /dev/null
+++ b/man/coalition2.Rd
@@ -0,0 +1,32 @@
+\name{coalition2}
+\alias{coalition2}
+\docType{data}
+
+\title{Coalition Dissolution in Parliamentary Democracies, Modified Version}
+\description{
+ This data set contains survival data on government coalitions in
+  parliamentary democracies (Belgium, Canada, Denmark, Finland, France,
+  Iceland, Ireland, Israel, Italy, Netherlands, Norway, Portugal, Spain,
+  Sweden, and the United Kingdom) for the period 1945-1987.  Country indicator variables are included in the sample data.
+}
+\usage{data(coalition2)}
+\format{
+  A data frame containing 8 variables ("duration", "ciep12", "invest",
+  "fract", "polar", "numst2", "crisis", "country") and 314 observations.  For
+  variable descriptions, please refer to King, Alt, Burns and Laver
+  (1990).
+}
+
+\source{ICPSR}
+
+\references{
+  King, Gary, James E. Alt, Nancy Elizabeth Burns and Michael Laver (1990).
+  ``A Unified Model  of Cabinet Dissolution in Parliamentary
+  Democracies,'' \emph{American Journal of Political Science}, vol. 34,
+  no. 3, pp. 846-870.
+
+  Gary King, James E. Alt, Nancy Burns, and Michael Laver.  ICPSR
+  Publication Related Archive, 1115.
+}
+
+\keyword{datasets}
diff --git a/man/coef-Zelig-method.Rd b/man/coef-Zelig-method.Rd
new file mode 100644
index 0000000..136df32
--- /dev/null
+++ b/man/coef-Zelig-method.Rd
@@ -0,0 +1,15 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-zelig.R
+\docType{methods}
+\name{coef,Zelig-method}
+\alias{coef,Zelig-method}
+\title{Method for extracting estimated coefficients from Zelig objects}
+\usage{
+\S4method{coef}{Zelig}(object)
+}
+\arguments{
+\item{object}{An Object of Class Zelig}
+}
+\description{
+Method for extracting estimated coefficients from Zelig objects
+}
diff --git a/man/coefficients-Zelig-method.Rd b/man/coefficients-Zelig-method.Rd
new file mode 100644
index 0000000..d3bb4ee
--- /dev/null
+++ b/man/coefficients-Zelig-method.Rd
@@ -0,0 +1,15 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-zelig.R
+\docType{methods}
+\name{coefficients,Zelig-method}
+\alias{coefficients,Zelig-method}
+\title{Method for extracting estimated coefficients from Zelig objects}
+\usage{
+\S4method{coefficients}{Zelig}(object)
+}
+\arguments{
+\item{object}{An Object of Class Zelig}
+}
+\description{
+Method for extracting estimated coefficients from Zelig objects
+}
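
For orientation, a short sketch of extracting coefficients from a fitted Zelig object; it reuses the zls/swiss example that appears under from_zelig_model later in this diff:

    library(Zelig)
    z5 <- zls$new()
    z5$zelig(Fertility ~ Education, data = swiss)
    coef(z5)          # estimated coefficients
    coefficients(z5)  # identical alias
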
diff --git a/man/combine_coef_se.Rd b/man/combine_coef_se.Rd
new file mode 100644
index 0000000..dc5c036
--- /dev/null
+++ b/man/combine_coef_se.Rd
@@ -0,0 +1,65 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{combine_coef_se}
+\alias{combine_coef_se}
+\title{Combines estimated coefficients and associated statistics
+from models estimated with multiply imputed data sets or bootstrapped data}
+\source{
+Partially based on \code{\link{mi.meld}} from Amelia.
+}
+\usage{
+combine_coef_se(obj, out_type = "matrix", bagging = FALSE,
+  messages = TRUE)
+}
+\arguments{
+\item{obj}{a zelig object with an estimated model}
+
+\item{out_type}{either \code{"matrix"} or \code{"list"} specifying
+whether the results should be returned as a matrix or a list.}
+
+\item{bagging}{logical whether or not to bag the bootstrapped coefficients}
+
+\item{messages}{logical whether or not to return messages for what is being
+returned}
+}
+\value{
+If the model uses multiply imputed or bootstrapped data then a
+ matrix (default) or list of combined coefficients (\code{coef}), standard
+ errors (\code{se}), z values (\code{zvalue}), p-values (\code{p}) is
+ returned. Rubin's Rules are used to combine output from multiply imputed
+ data. An error is returned if neither multiple imputation nor bootstrapping
+ was used; in those cases, please use the \code{get_coef}, \code{get_se}, and
+ \code{get_pvalue} methods instead.
+}
+\description{
+Combines estimated coefficients and associated statistics
+from models estimated with multiply imputed data sets or bootstrapped data
+}
+\examples{
+set.seed(123)
+
+## Multiple imputation example
+# Create fake imputed data
+n <- 100
+x1 <- runif(n)
+x2 <- runif(n)
+y <- rnorm(n)
+data.1 <- data.frame(y = y, x = x1)
+data.2 <- data.frame(y = y, x = x2)
+
+# Estimate model
+mi.out <- to_zelig_mi(data.1, data.2)
+z.out.mi <- zelig(y ~ x, model = "ls", data = mi.out)
+
+# Combine and extract coefficients and standard errors
+combine_coef_se(z.out.mi)
+
+## Bootstrap example
+z.out.boot <- zelig(y ~ x, model = "ls", data = data.1, bootstrap = 20)
+combine_coef_se(z.out.boot)
+
+}
+\author{
+Christopher Gandrud and James Honaker
+}
diff --git a/man/construct.v.Rd b/man/construct.v.Rd
deleted file mode 100644
index 44b8634..0000000
--- a/man/construct.v.Rd
+++ /dev/null
@@ -1,23 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/model-mlogit.R
-\name{construct.v}
-\alias{construct.v}
-\title{Split Names of Vectors into N-vectors
-This function is used to organize how variables are spread
-across the list of formulas}
-\usage{
-construct.v(constraints, ndim)
-}
-\arguments{
-\item{constraints}{a constraints object}
-
-\item{ndim}{an integer specifying the number of dimensions}
-}
-\value{
-a list of character-vectors
-}
-\description{
-Split Names of Vectors into N-vectors
-This function is used to organize how variables are spread
-across the list of formulas
-}
diff --git a/man/createJSON.Rd b/man/createJSON.Rd
new file mode 100644
index 0000000..5da6e3b
--- /dev/null
+++ b/man/createJSON.Rd
@@ -0,0 +1,19 @@
+\name{createJSON}
+\alias{createJSON}
+\title{Utility function for constructing JSON file that encodes the hierarchy of available statistical models in Zelig}
+\usage{
+createJSON(movefile=TRUE)
+}
+\arguments{
+\item{movefile}{Logical of whether to (TRUE) move the JSON file into path \code{./inst/JSON} or (FALSE) leave in working directory.}
+}
+\value{
+Returns TRUE on successful creation of the JSON file
+}
+\description{
+Utility function for constructing a JSON file that encodes the hierarchy of available statistical models.
+}
+\author{
+Christine Choirat, Vito D'Orazio, James Honaker
+}
+
diff --git a/man/createJSONzeligchoice.Rd b/man/createJSONzeligchoice.Rd
deleted file mode 100644
index 48f85a7..0000000
--- a/man/createJSONzeligchoice.Rd
+++ /dev/null
@@ -1,15 +0,0 @@
-\name{createJSONzeligchoice}
-\alias{createJSONzeligchoice}
-\title{Utility function for constructing JSON file that encodes the hierarchy of available statistical models in ZeligChoice}
-\usage{
-createJSONzeligchoice()
-}
-\value{
-Returns TRUE on successful completion of json file
-}
-\description{
-Utility function for construction a JSON file that encodes the hierarchy of available statistical models.  
-}
-\author{
-Christine Choirat, Vito D'Orazio
-}
\ No newline at end of file
diff --git a/man/df.residual-Zelig-method.Rd b/man/df.residual-Zelig-method.Rd
new file mode 100644
index 0000000..3470b08
--- /dev/null
+++ b/man/df.residual-Zelig-method.Rd
@@ -0,0 +1,15 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-zelig.R
+\docType{methods}
+\name{df.residual,Zelig-method}
+\alias{df.residual,Zelig-method}
+\title{Method for extracting residual degrees-of-freedom from Zelig objects}
+\usage{
+\S4method{df.residual}{Zelig}(object)
+}
+\arguments{
+\item{object}{An Object of Class Zelig}
+}
+\description{
+Method for extracting residual degrees-of-freedom from Zelig objects
+}
diff --git a/man/eidat.Rd b/man/eidat.Rd
new file mode 100644
index 0000000..c953306
--- /dev/null
+++ b/man/eidat.Rd
@@ -0,0 +1,19 @@
+\name{eidat}
+
+\alias{eidat}
+
+\title{Simulation Data for Ecological Inference}
+
+\description{
+  This dataframe contains a simulated data set to illustrate the models
+  for ecological inference.  
+}
+
+\usage{data(eidat)}
+
+\format{
+  A table containing 4 variables ("t0", "t1", "x0", "x1") and 10 
+observations.
+}
+
+\keyword{datasets}
diff --git a/man/ev.mlogit.Rd b/man/ev.mlogit.Rd
deleted file mode 100644
index cbd952c..0000000
--- a/man/ev.mlogit.Rd
+++ /dev/null
@@ -1,27 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/model-mlogit.R
-\name{ev.mlogit}
-\alias{ev.mlogit}
-\title{Simulate Expected Value for Multinomial Logit}
-\usage{
-ev.mlogit(fitted, constraints, all.coef, x, ndim, cnames)
-}
-\arguments{
-\item{fitted}{a fitted model object}
-
-\item{constraints}{a constraints object}
-
-\item{all.coef}{all the coeficients}
-
-\item{x}{a setx object}
-
-\item{ndim}{an integer specifying the number of dimensions}
-
-\item{cnames}{a character-vector specifying the names of the columns}
-}
-\value{
-a matrix of simulated values
-}
-\description{
-Simulate Expected Value for Multinomial Logit
-}
diff --git a/man/expand_grid_setrange.Rd b/man/expand_grid_setrange.Rd
new file mode 100644
index 0000000..d2f6bea
--- /dev/null
+++ b/man/expand_grid_setrange.Rd
@@ -0,0 +1,15 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{expand_grid_setrange}
+\alias{expand_grid_setrange}
+\title{Convenience function for setrange and setrange1}
+\usage{
+expand_grid_setrange(x)
+}
+\arguments{
+\item{x}{data passed to setrange or setrange1}
+}
+\description{
+Convenience function for setrange and setrange1
+}
+\keyword{internal}
diff --git a/man/extract_setrange.Rd b/man/extract_setrange.Rd
new file mode 100644
index 0000000..0b95c11
--- /dev/null
+++ b/man/extract_setrange.Rd
@@ -0,0 +1,27 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/interface.R
+\name{extract_setrange}
+\alias{extract_setrange}
+\title{Extract setrange to return as tidy formatted data frame}
+\usage{
+extract_setrange(obj, which_range = "range", only_setx = FALSE)
+}
+\arguments{
+\item{obj}{a zelig object containing a range of simulated quantities of
+interest}
+
+\item{which_range}{character string either \code{'range'} or \code{'range1'}
+indicating whether to extract the first or second set of fitted values}
+
+\item{only_setx}{logical whether or not to only extract \code{setx} values.}
+}
+\description{
+Extract setrange to return as tidy formatted data frame
+}
+\seealso{
+\code{\link{zelig_qi_to_df}}
+}
+\author{
+Christopher Gandrud
+}
+\keyword{internal}
diff --git a/man/extract_setx.Rd b/man/extract_setx.Rd
new file mode 100644
index 0000000..b545230
--- /dev/null
+++ b/man/extract_setx.Rd
@@ -0,0 +1,26 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/interface.R
+\name{extract_setx}
+\alias{extract_setx}
+\title{Extract setx for non-range and return tidy formatted data frame}
+\usage{
+extract_setx(obj, which_x = "x", only_setx = FALSE)
+}
+\arguments{
+\item{obj}{a zelig object containing simulated quantities of interest}
+
+\item{which_x}{character string either \code{'x'} or \code{'x1'} indicating whether
+to extract the first or second set of fitted values}
+
+\item{only_setx}{logical whether or not to only extract \code{setx} values.}
+}
+\description{
+Extract setx for non-range and return tidy formatted data frame
+}
+\seealso{
+\code{\link{zelig_qi_to_df}}
+}
+\author{
+Christopher Gandrud
+}
+\keyword{internal}
diff --git a/man/factor_coef_combine.Rd b/man/factor_coef_combine.Rd
new file mode 100644
index 0000000..4390422
--- /dev/null
+++ b/man/factor_coef_combine.Rd
@@ -0,0 +1,22 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/interface.R
+\name{factor_coef_combine}
+\alias{factor_coef_combine}
+\title{Return individual factor coefficient fitted values to single factor variable}
+\usage{
+factor_coef_combine(obj, fitted)
+}
+\arguments{
+\item{obj}{a zelig object with an estimated model}
+
+\item{fitted}{a data frame with values fitted by \code{setx}. Note: this is
+created internally by \code{\link{extract_setx}} and
+  \code{\link{extract_setrange}}}
+}
+\description{
+Return individual factor coefficient fitted values to single factor variable
+}
+\author{
+Christopher Gandrud
+}
+\keyword{internal}
diff --git a/man/figures/example_plot_ci_plot-1.png b/man/figures/example_plot_ci_plot-1.png
new file mode 100644
index 0000000..909221e
Binary files /dev/null and b/man/figures/example_plot_ci_plot-1.png differ
diff --git a/man/figures/example_plot_graph-1.png b/man/figures/example_plot_graph-1.png
new file mode 100644
index 0000000..c42310f
Binary files /dev/null and b/man/figures/example_plot_graph-1.png differ
diff --git a/man/figures/img/zelig_models_thumb.png b/man/figures/img/zelig_models_thumb.png
new file mode 100644
index 0000000..e836e74
Binary files /dev/null and b/man/figures/img/zelig_models_thumb.png differ
diff --git a/man/figures/img/zelig_poster.jpeg b/man/figures/img/zelig_poster.jpeg
new file mode 100644
index 0000000..c705845
Binary files /dev/null and b/man/figures/img/zelig_poster.jpeg differ
diff --git a/man/figures/zelig.png b/man/figures/zelig.png
new file mode 100644
index 0000000..bac36ef
Binary files /dev/null and b/man/figures/zelig.png differ
diff --git a/man/fitted-Zelig-method.Rd b/man/fitted-Zelig-method.Rd
new file mode 100644
index 0000000..7f2110d
--- /dev/null
+++ b/man/fitted-Zelig-method.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-zelig.R
+\docType{methods}
+\name{fitted,Zelig-method}
+\alias{fitted,Zelig-method}
+\title{Method for extracting estimated fitted values from Zelig objects}
+\usage{
+\S4method{fitted}{Zelig}(object, ...)
+}
+\arguments{
+\item{object}{An Object of Class Zelig}
+
+\item{...}{Additional parameters to be passed to fitted}
+}
+\description{
+Method for extracting estimated fitted values from Zelig objects
+}
diff --git a/man/free1.Rd b/man/free1.Rd
new file mode 100644
index 0000000..220dfaa
--- /dev/null
+++ b/man/free1.Rd
@@ -0,0 +1,99 @@
+\name{free1}
+\alias{free1}
+\title{Freedom of Speech Data}
+\usage{data(free1)}
+\description{
+
+  Selection of individual-level survey data for freedom of speech.
+
+  }
+
+\details{ 
+
+  A table with 150 observations and 12 variables. \itemize{
+    \item{sex}{1 for men and 0 for women}
+    \item{age}{Age of respondent in years}
+    \item{educ}{Levels of education, coded as a numeric variable with
+      \itemize{
+	\item{1}{No formal education}
+	\item{2}{Less than primary school education}
+	\item{3}{Completed primary school}
+	\item{4}{Completed secondary school}
+	\item{5}{Completed high school}
+	\item{6}{Completed college}
+	\item{7}{Completed post-graduate degree}
+      }
+    }
+    \item{country}{Character strings consisting of "Oceana",
+      "Eurasia", and "Eastasia", after Orwell's \emph{1984}.}
+    \item{y}{Self assessment (see below).}
+    \item{v1-v6}{Response to vignettes (see below).}
+  }
+  Survey respondents were asked in almost the same language for a
+  self-assessment and for an assessment of several hypothetical persons
+  described by written vignettes.  The self assessment (\code{self}, in
+  the data set), "How free do you think [name/you] [is/are] to express
+  [him-her/your]self without fear of government reprisal?" was first
+  asked of the survey respondent with respect to him or herself, and
+  then after each of the vignettes.  The possible response categories are:  \itemize{
+    \item{1}{Completely free}
+    \item{2}{Very free}
+    \item{3}{Moderately free}
+    \item{4}{Slightly free}
+    \item{5}{Not free at all}
+  }
+  The vignettes, ordered from most free to least free, are:
+  \itemize{
+    \item{vign1}{[Kay] does not like many of the government's
+    policies. She frequently publishes her opinion in newspapers,
+    criticizing decisions by officials and calling for change. She sees
+    little reason these actions could lead to government reprisal.}
+
+    \item{vign2}{[Michael] disagrees with many of the government's
+    policies. Though he knows criticism is frowned upon, he doesn't
+    believe the government would punish someone for expressing critical
+    views. He makes his opinion known on most issues without regard to
+    who is listening.}
+
+    \item{vign3}{[Bob] has political views at odds with the
+    government. He has heard of people occasionally being arrested for
+    speaking out against the government, and government leaders
+    sometimes make political speeches condemning those who criticize. He
+    sometimes writes letters to newspapers about politics, but he is
+    careful not to use his real name.}
+
+    \item{vign4}{[Connie] does not like the government's stance on many
+    issues. She has a friend who was arrested for being too openly
+    critical of governmental leaders, and so she avoids voicing her
+    opinions in public places.}
+
+    \item{vign5}{[Vito] disagrees with many of the government's
+    policies, and is very careful about whom he says this to, reserving
+    his real opinions for family and close friends only. He knows
+    several men who have been taken away by government officials for
+    saying negative things in public.}
+
+    \item{vign6}{[Sonny] lives in fear of being harassed for his
+    political views. Everyone he knows who has spoken out against the
+    government has been arrested or taken away. He never says a word
+    about anything the government does, not even when he is at home
+    alone with his family. }
+  }  
+}
+
+\references{
+  \emph{WHO's World Health Survey}
+    by Lydia Bendib, Somnath Chatterji, Alena Petrakova, Ritu Sadana,
+    Joshua A. Salomon, Margie Schneider, Bedirhan Ustun, Maria
+    Villanueva
+
+  Jonathan Wand, Gary King and Olivia Lau. (2007) ``Anchors: Software for
+  Anchoring Vignettes''. \emph{Journal of Statistical Software}.  Forthcoming.
+  copy at http://wand.stanford.edu/research/anchors-jss.pdf
+
+  Gary King and Jonathan Wand.  "Comparing Incomparable Survey
+  Responses: New Tools for Anchoring Vignettes," Political Analysis, 15,
+  1 (Winter, 2007): Pp. 46-66,
+  copy at http://gking.harvard.edu/files/abs/c-abs.shtml.
+}
+\keyword{datasets}
diff --git a/man/free2.Rd b/man/free2.Rd
new file mode 100644
index 0000000..c42d03f
--- /dev/null
+++ b/man/free2.Rd
@@ -0,0 +1,99 @@
+\name{free2}
+\alias{free2}
+\title{Freedom of Speech Data}
+\usage{data(free2)}
+\description{
+
+  Selection of individual-level survey data for freedom of speech.
+
+  }
+
+\details{ 
+
+  A table with 150 observations and 12 variables. \itemize{
+    \item{sex}{1 for men and 0 for women}
+    \item{age}{Age of respondent in years}
+    \item{educ}{Levels of education, coded as a numeric variable with
+      \itemize{
+	\item{1}{No formal education}
+	\item{2}{Less than primary school education}
+	\item{3}{Completed primary school}
+	\item{4}{Completed secondary school}
+	\item{5}{Completed high school}
+	\item{6}{Completed college}
+	\item{7}{Completed post-graduate degree}
+      }
+    }
+    \item{country}{Character strings consisting of "Oceana",
+      "Eurasia", and "Eastasia", after Orwell's \emph{1984}.}
+    \item{y}{Self assessment (see below).}
+    \item{v1-v6}{Response to vignettes (see below).}
+  }
+  Survey respondents were asked in almost the same language for a
+  self-assessment and for an assessment of several hypothetical persons
+  described by written vignettes.  The self assessment (\code{self}, in
+  the data set), "How free do you think [name/you] [is/are] to express
+  [him-her/your]self without fear of government reprisal?" was first
+  asked of the survey respondent with respect to him or herself, and
+  then after each of the vignettes.  The possible response categories are:  \itemize{
+    \item{1}{Completely free}
+    \item{2}{Very free}
+    \item{3}{Moderately free}
+    \item{4}{Slightly free}
+    \item{5}{Not free at all}
+  }
+  The vignettes, ordered from most free to least free, are:
+  \itemize{
+    \item{vign1}{[Kay] does not like many of the government's
+    policies. She frequently publishes her opinion in newspapers,
+    criticizing decisions by officials and calling for change. She sees
+    little reason these actions could lead to government reprisal.}
+
+    \item{vign2}{[Michael] disagrees with many of the government's
+    policies. Though he knows criticism is frowned upon, he doesn't
+    believe the government would punish someone for expressing critical
+    views. He makes his opinion known on most issues without regard to
+    who is listening.}
+
+    \item{vign3}{[Bob] has political views at odds with the
+    government. He has heard of people occasionally being arrested for
+    speaking out against the government, and government leaders
+    sometimes make political speeches condemning those who criticize. He
+    sometimes writes letters to newspapers about politics, but he is
+    careful not to use his real name.}
+
+    \item{vign4}{[Connie] does not like the government's stance on many
+    issues. She has a friend who was arrested for being too openly
+    critical of governmental leaders, and so she avoids voicing her
+    opinions in public places.}
+
+    \item{vign5}{[Vito] disagrees with many of the government's
+    policies, and is very careful about whom he says this to, reserving
+    his real opinions for family and close friends only. He knows
+    several men who have been taken away by government officials for
+    saying negative things in public.}
+
+    \item{vign6}{[Sonny] lives in fear of being harassed for his
+    political views. Everyone he knows who has spoken out against the
+    government has been arrested or taken away. He never says a word
+    about anything the government does, not even when he is at home
+    alone with his family. }
+  }  
+}
+
+\references{
+  \emph{WHO's World Health Survey}
+    by Lydia Bendib, Somnath Chatterji, Alena Petrakova, Ritu Sadana,
+    Joshua A. Salomon, Margie Schneider, Bedirhan Ustun, Maria
+    Villanueva
+
+  Jonathan Wand, Gary King and Olivia Lau. (2007) ``Anchors: Software for
+  Anchoring Vignettes''. \emph{Journal of Statistical Software}.  Forthcoming.
+  copy at http://wand.stanford.edu/research/anchors-jss.pdf
+
+  Gary King and Jonathan Wand.  "Comparing Incomparable Survey
+  Responses: New Tools for Anchoring Vignettes," Political Analysis, 15,
+  1 (Winter, 2007): Pp. 46-66,
+  copy at http://gking.harvard.edu/files/abs/c-abs.shtml.
+}
+\keyword{datasets}
diff --git a/man/friendship.Rd b/man/friendship.Rd
new file mode 100644
index 0000000..8e72172
--- /dev/null
+++ b/man/friendship.Rd
@@ -0,0 +1,28 @@
+\name{friendship}
+
+\alias{friendship}
+
+\title{Simulated Example of Schoolchildren Friendship Network}
+
+\description{
+  This data set contains six sociomatrices of simulated data on friendship ties among schoolchildren.}
+
+\usage{data(friendship)}
+
+\format{
+Each variable in the dataset is a 15 by 15 matrix representing some form of social network tie held by the fictitious children. The matrices are labeled "friends", "advice", "prestige", "authority", "perpower" and "per".
+
+The sociomatrices were combined into the friendship dataset using the format.network.data function from the netglm package by Skyler Cranmer as shown in the example.
+
+}
+
+\source{fictitious}
+
+\examples{
+	\dontrun{
+friendship <- format.network.data(friends, advice, prestige, authority, perpower, per)
+}} 
+
+\keyword{datasets}
+
+
diff --git a/man/from_zelig_model.Rd b/man/from_zelig_model.Rd
new file mode 100644
index 0000000..ead363f
--- /dev/null
+++ b/man/from_zelig_model.Rd
@@ -0,0 +1,29 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/interface.R
+\name{from_zelig_model}
+\alias{from_zelig_model}
+\title{Extract the original fitted model object from a \code{zelig} estimation}
+\usage{
+from_zelig_model(obj)
+}
+\arguments{
+\item{obj}{a zelig object with an estimated model}
+}
+\description{
+Extract the original fitted model object from a \code{zelig} estimation
+}
+\details{
+Extracts the original fitted model object from a \code{zelig}
+  estimation. This can be useful for passing output to non-Zelig
+  post-estimation functions and packages such as texreg and stargazer
+  for creating well-formatted presentation document tables.
+}
+\examples{
+z5 <- zls$new()
+z5$zelig(Fertility ~ Education, data = swiss)
+from_zelig_model(z5)
+
+}
+\author{
+Christopher Gandrud
+}
diff --git a/man/get_pvalue.Rd b/man/get_pvalue.Rd
new file mode 100644
index 0000000..c27840a
--- /dev/null
+++ b/man/get_pvalue.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/wrappers.R
+\name{get_pvalue}
+\alias{get_pvalue}
+\title{Extract p-values from a Zelig estimated model}
+\usage{
+get_pvalue(object)
+}
+\arguments{
+\item{object}{an object of class Zelig}
+}
+\description{
+Extract p-values from a Zelig estimated model
+}
+\author{
+Christopher Gandrud
+}
diff --git a/man/get_qi.Rd b/man/get_qi.Rd
new file mode 100644
index 0000000..4e23608
--- /dev/null
+++ b/man/get_qi.Rd
@@ -0,0 +1,27 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/wrappers.R
+\name{get_qi}
+\alias{get_qi}
+\title{Extract quantities of interest from a Zelig simulation}
+\usage{
+get_qi(object, qi = "ev", xvalue = "x", subset = NULL)
+}
+\arguments{
+\item{object}{an object of class Zelig}
+
+\item{qi}{character string with the name of quantity of interest desired:
+\code{"ev"} for expected values, \code{"pv"} for predicted values or
+\code{"fd"} for first differences.}
+
+\item{xvalue}{character string stating which of the set of values of \code{x}
+should be used for getting the quantity of interest.}
+
+\item{subset}{subset for multiply imputed data (only relevant if multiply
+imputed data is supplied in the original call.)}
+}
+\description{
+Extract quantities of interest from a Zelig simulation
+}
+\author{
+Christopher Gandrud
+}
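
A sketch of the zelig, setx, sim, get_qi chain, reusing the turnout logit example shown under setx later in this diff:

    library(Zelig)
    data(turnout)
    z.out <- zelig(vote ~ race + educate, model = "logit", data = turnout)
    s.out <- sim(z.out, x = setx(z.out))
    ev <- get_qi(s.out, qi = "ev", xvalue = "x")  # simulated expected values
    head(ev)
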
diff --git a/man/get_se.Rd b/man/get_se.Rd
new file mode 100644
index 0000000..ade6945
--- /dev/null
+++ b/man/get_se.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/wrappers.R
+\name{get_se}
+\alias{get_se}
+\title{Extract standard errors from a Zelig estimated model}
+\usage{
+get_se(object)
+}
+\arguments{
+\item{object}{an object of class Zelig}
+}
+\description{
+Extract standard errors from a Zelig estimated model
+}
+\author{
+Christopher Gandrud
+}
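
A companion sketch for the estimation-side extractors get_se and get_pvalue documented above, using the same turnout logit fit:

    library(Zelig)
    data(turnout)
    z.out <- zelig(vote ~ race + educate, model = "logit", data = turnout)
    get_se(z.out)      # standard errors of the estimated coefficients
    get_pvalue(z.out)  # matching p-values
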
diff --git a/man/grunfeld.Rd b/man/grunfeld.Rd
new file mode 100644
index 0000000..e78232f
--- /dev/null
+++ b/man/grunfeld.Rd
@@ -0,0 +1,23 @@
+\name{grunfeld}
+
+\alias{grunfeld}
+
+\title{Simulation Data for model Seemingly Unrelated Regression (sur) that corresponds to method SUR of systemfit}
+
+\description{
+  Data frame containing 20 annual observations from 1935 to
+  1954 of 7 variables for two firms, General Electric (GE) and Westinghouse (W).
+  Columns are Year; Ige and Iw = gross investment for GE and
+  W, respectively; Fge and Fw = market value of the firm as of the
+  beginning of the year; Cge and Cw = capital stock measure as of the
+  beginning of the year.
+}
+
+\usage{data(grunfeld)}
+
+\format{
+  A table containing 7 variables ("Year", "Ige", "Fge", "Cge","Iw", "Fw","Cw")
+  and 20 observations.  
+}
+
+\keyword{datasets}
diff --git a/man/hoff.Rd b/man/hoff.Rd
new file mode 100644
index 0000000..5791625
--- /dev/null
+++ b/man/hoff.Rd
@@ -0,0 +1,27 @@
+\name{hoff}
+
+\alias{hoff}
+
+\title{Social Security Expenditure Data}
+
+\description{
+  This data set contains annual social security expenditure (as percent
+  of budget lagged by two years), the
+  relative frequency of mentions that social justice received in the party's
+  platform in each year, and whether the president is Republican or
+  Democrat.  
+}
+
+\usage{data(hoff)}
+
+\format{A table containing 5 variables ("year", "L2SocSec", "Just503D", "Just503R", "RGovDumy") and 36 observations.}
+
+\source{ICPSR (replication dataset s1109)}
+
+\references{
+  Gary King and Michael Laver. ``On Party Platforms, Mandates, and
+  Government Spending,'' \emph{American Political Science Review},
+  Vol. 87, No. 3 (September, 1993): pp. 744-750.
+}
+
+\keyword{datasets}
diff --git a/man/homerun.Rd b/man/homerun.Rd
new file mode 100644
index 0000000..03b03be
--- /dev/null
+++ b/man/homerun.Rd
@@ -0,0 +1,25 @@
+\name{homerun}
+\alias{homerun}
+\docType{data}
+
+\title{Sample Data on Home Runs Hit By Mark McGwire and Sammy Sosa in 1998.}
+\description{
+ Game-by-game information for the 1998 season for Mark McGwire and Sammy Sosa. Data are a subset of the dataset provided in Simonoff (1998).
+}
+\usage{data(homerun)}
+\format{
+  A data frame containing 5 variables ("gameno", "month", "homeruns", "playerstatus", "player") and 326 observations.  
+  \describe{
+    \item{\code{gameno}}{an integer variable denoting the game number}
+    \item{\code{month}}{a factor variable with levels "March" through "September" denoting the month of the game}
+    \item{\code{homeruns}}{an integer vector denoting the number of homeruns hit in that game for that player}
+    \item{\code{playerstatus}}{an integer vector equal to "0" if the player played in the game, and "1" if they did not.}
+    \item{\code{player}}{an  integer vector equal to "0" (McGwire) or "1" (Sosa)}
+  }
+}
+
+\source{\url{http://jse.amstat.org/v6n3/datasets.simonoff.html}}
+
+\references{Simonoff, Jeffrey S. 1998. ``Move Over, Roger Maris: Breaking Baseball's Most Famous Record.'' \emph{Journal of Statistics Education} 6(3). Data used are a subset of the data in the article.}
+
+\keyword{datasets}
diff --git a/man/immigration.Rd b/man/immigration.Rd
new file mode 100644
index 0000000..9879188
--- /dev/null
+++ b/man/immigration.Rd
@@ -0,0 +1,34 @@
+\name{immigration}
+
+\alias{immigration}
+\alias{immi1}
+\alias{immi2}
+\alias{immi3}
+\alias{immi4}
+\alias{immi5}
+
+\title{Individual Preferences Over Immigration Policy}
+
+\description{These five datasets are part of a larger set of 10 multiply
+  imputed data sets describing individual preferences toward immigration
+  policy.  Imputation was performed via Amelia.  
+}
+
+\format{
+  Each multiply-imputed data set consists of a table with 7 variables
+  ("ipip", "wage1992", "prtyid",
+  "ideol", "gender") and 2,485 observations.  For variable descriptions,
+  please refer to Scheve and
+  Slaughter, 2001.
+}
+
+\source{National Election Survey}
+
+\references{
+ Scheve, Kenneth and Matthew Slaughter (2001). ``Labor Market Competition
+ and Individual Preferences Over Immigration Policy,'' \emph{The Review of
+ Economics and Statistics}, vol. 83, no. 1, pp. 133-145.  }
+
+\keyword{datasets}
+
+
diff --git a/man/is_length_not_1.Rd b/man/is_length_not_1.Rd
new file mode 100644
index 0000000..1416451
--- /dev/null
+++ b/man/is_length_not_1.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/assertions.R
+\name{is_length_not_1}
+\alias{is_length_not_1}
+\title{Check if an object has a length greater than 1}
+\usage{
+is_length_not_1(x, msg = "Length is 1.", fail = TRUE)
+}
+\arguments{
+\item{x}{an object}
+
+\item{msg}{character string with the error message to return if
+\code{fail = TRUE}.}
+
+\item{fail}{logical whether to return an error if length is not greater than
+1.}
+}
+\description{
+Check if an object has a length greater than 1
+}
diff --git a/man/is_sims_present.Rd b/man/is_sims_present.Rd
new file mode 100644
index 0000000..93e9cde
--- /dev/null
+++ b/man/is_sims_present.Rd
@@ -0,0 +1,16 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/assertions.R
+\name{is_sims_present}
+\alias{is_sims_present}
+\title{Check if any simulations are present in sim.out}
+\usage{
+is_sims_present(x, fail = TRUE)
+}
+\arguments{
+\item{x}{a sim.out method}
+
+\item{fail}{logical whether to return an error if no simulations are present.}
+}
+\description{
+Check if any simulations are present in sim.out
+}
diff --git a/man/is_simsrange.Rd b/man/is_simsrange.Rd
new file mode 100644
index 0000000..2909231
--- /dev/null
+++ b/man/is_simsrange.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/assertions.R
+\name{is_simsrange}
+\alias{is_simsrange}
+\title{Check if simulations for a range of fitted values are present in sim.out}
+\usage{
+is_simsrange(x, fail = TRUE)
+}
+\arguments{
+\item{x}{a sim.out method}
+
+\item{fail}{logical whether to return an error if simulation range is not
+present.}
+}
+\description{
+Check if simulations for a range of fitted values are present in sim.out
+}
diff --git a/man/is_simsrange1.Rd b/man/is_simsrange1.Rd
new file mode 100644
index 0000000..5a2c28c
--- /dev/null
+++ b/man/is_simsrange1.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/assertions.R
+\name{is_simsrange1}
+\alias{is_simsrange1}
+\title{Check if simulations for a range1 of fitted values are present in sim.out}
+\usage{
+is_simsrange1(x, fail = TRUE)
+}
+\arguments{
+\item{x}{a sim.out method}
+
+\item{fail}{logical whether to return an error if simulation range is not
+present.}
+}
+\description{
+Check if simulations for a range1 of fitted values are present in sim.out
+}
diff --git a/man/is_simsx.Rd b/man/is_simsx.Rd
new file mode 100644
index 0000000..b4474c2
--- /dev/null
+++ b/man/is_simsx.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/assertions.R
+\name{is_simsx}
+\alias{is_simsx}
+\title{Check if simulations for individual values are present in sim.out}
+\usage{
+is_simsx(x, fail = TRUE)
+}
+\arguments{
+\item{x}{a sim.out method}
+
+\item{fail}{logical whether to return an error if simulations for individual
+values are not present.}
+}
+\description{
+Check if simulations for individual values are present in sim.out
+}
diff --git a/man/is_simsx1.Rd b/man/is_simsx1.Rd
new file mode 100644
index 0000000..2b05544
--- /dev/null
+++ b/man/is_simsx1.Rd
@@ -0,0 +1,19 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/assertions.R
+\name{is_simsx1}
+\alias{is_simsx1}
+\title{Check if simulations for individual values for x1 are present
+  in sim.out}
+\usage{
+is_simsx1(x, fail = TRUE)
+}
+\arguments{
+\item{x}{a sim.out method}
+
+\item{fail}{logical whether to return an error if simulations for individual
+values of x1 are not present.}
+}
+\description{
+Check if simulations for individual values for x1 are present
+  in sim.out
+}
diff --git a/man/is_timeseries.Rd b/man/is_timeseries.Rd
new file mode 100644
index 0000000..b88f31b
--- /dev/null
+++ b/man/is_timeseries.Rd
@@ -0,0 +1,19 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/assertions.R
+\name{is_timeseries}
+\alias{is_timeseries}
+\title{Check if a zelig object contains a time series model}
+\usage{
+is_timeseries(x, msg = "Not a timeseries object.", fail = FALSE)
+}
+\arguments{
+\item{x}{a zelig object}
+
+\item{msg}{character string with the error message to return if
+\code{fail = TRUE}.}
+
+\item{fail}{logical whether to return an error if \code{x} is not a timeseries.}
+}
+\description{
+Check if a zelig object contains a time series model
+}
diff --git a/man/is_uninitializedField.Rd b/man/is_uninitializedField.Rd
new file mode 100644
index 0000000..f7c8c02
--- /dev/null
+++ b/man/is_uninitializedField.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/assertions.R
+\name{is_uninitializedField}
+\alias{is_uninitializedField}
+\title{Check if uninitializedField}
+\usage{
+is_uninitializedField(x, msg = "Zelig model has not been estimated.",
+  fail = TRUE)
+}
+\arguments{
+\item{x}{a zelig.out method}
+
+\item{msg}{character string with the error message to return if
+\code{fail = TRUE}.}
+
+\item{fail}{logical whether to return an error if \code{x} is uninitialized.}
+}
+\description{
+Check if uninitializedField
+}
diff --git a/man/is_varying.Rd b/man/is_varying.Rd
new file mode 100644
index 0000000..626801d
--- /dev/null
+++ b/man/is_varying.Rd
@@ -0,0 +1,19 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/assertions.R
+\name{is_varying}
+\alias{is_varying}
+\title{Check if the values in a vector vary}
+\usage{
+is_varying(x, msg = "Vector does not vary.", fail = TRUE)
+}
+\arguments{
+\item{x}{a vector}
+
+\item{msg}{character string with the error message to return if
+\code{fail = TRUE}.}
+
+\item{fail}{logical whether to return an error if \code{x} does not vary.}
+}
+\description{
+Check if the values in a vector vary
+}
diff --git a/man/is_zelig.Rd b/man/is_zelig.Rd
new file mode 100644
index 0000000..9f490cc
--- /dev/null
+++ b/man/is_zelig.Rd
@@ -0,0 +1,16 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/assertions.R
+\name{is_zelig}
+\alias{is_zelig}
+\title{Check if is a zelig object}
+\usage{
+is_zelig(x, fail = TRUE)
+}
+\arguments{
+\item{x}{an object}
+
+\item{fail}{logical whether to return an error if x is not a Zelig object.}
+}
+\description{
+Check if is a zelig object
+}
diff --git a/man/is_zeligei.Rd b/man/is_zeligei.Rd
new file mode 100644
index 0000000..17643f1
--- /dev/null
+++ b/man/is_zeligei.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/assertions.R
+\name{is_zeligei}
+\alias{is_zeligei}
+\title{Check if an object was created with ZeligEI}
+\usage{
+is_zeligei(x, msg = "Function is not relevant for ZeligEI objects.",
+  fail = TRUE)
+}
+\arguments{
+\item{x}{a zelig object}
+
+\item{msg}{character string with the error message to return if
+\code{fail = TRUE}.}
+
+\item{fail}{logical whether to return an error if the check fails.}
+}
+\description{
+Check if an object was created with ZeligEI
+}
diff --git a/man/klein.Rd b/man/klein.Rd
new file mode 100644
index 0000000..7c1bdfb
--- /dev/null
+++ b/man/klein.Rd
@@ -0,0 +1,25 @@
+\name{klein}
+
+\alias{klein}
+
+\title{Simulation Data for model Two-Stage Least Square (twosls) that corresponds to method 2SLS of systemfit}
+
+\description{
+  Data frame contains annual observations of the US economy from 1920 to
+  1940. The columns are: Year, C=Consumption, P=Corporate profits,
+  P1=Previous year corporate profit, Wtot=Total wage, Wp=Private wage
+  bill, Wg=Government wage bill, I=Investment,
+  K1=Previous year capital stock, X=GNP, G=Government spending, T=Taxes,
+  X1=Previous year GNP, Tm=Year-1931.
+  
+}
+
+\usage{data(klein)}
+
+\format{
+  A table containing 14 variables ("year", "C", "P", "P1","Wtot", "Wp",
+  "Wg", "I", "K1","X", "G", "T", "X1", "Tm") and 21 observations.  
+}
+\source{http://pages.stern.nyu.edu/~wgreene/Text/econometricanalysis.htm}
+
+\keyword{datasets}
diff --git a/man/kmenta.Rd b/man/kmenta.Rd
new file mode 100644
index 0000000..ef2a032
--- /dev/null
+++ b/man/kmenta.Rd
@@ -0,0 +1,24 @@
+\name{kmenta}
+
+\alias{kmenta}
+
+\title{Simulation Data for model Three-Stage Least Square (threesls) that corresponds to method 3SLS of systemfit}
+
+\description{
+  Dataframe contains 20 annual observations of a supply/demand model
+  with 5 variables. Columns are q=Food consumption per capita,
+  p=Ratio of food price to general consumer prices, 
+  d=Disposable income in constant dollars,
+  f=Ratio of preceding year's prices received by farmers to general consumer prices,
+  a=Time index.
+   
+}
+
+\usage{data(kmenta)}
+
+\format{
+  A table containing 5 variables ("q", "p", "d", "f","a")
+  and 20 observations.  
+}
+
+\keyword{datasets}
diff --git a/man/macro.Rd b/man/macro.Rd
new file mode 100644
index 0000000..61a7878
--- /dev/null
+++ b/man/macro.Rd
@@ -0,0 +1,37 @@
+\name{macro}
+
+\alias{macro}
+
+\title{Macroeconomic Data}
+
+\description{
+  Selected macroeconomic indicators for Austria, Belgium, Canada,
+  Denmark, Finland, France, Italy, Japan, the Netherlands, Norway,
+  Sweden, the United Kingdom, the United States, and West Germany for
+  the period 1966-1990.  
+}
+
+\usage{data(macro)}
+
+\format{
+  A table containing 6 variables ("country", "year", "gdp", 
+  "unem", "capmob", and "trade") and 350 observations.
+}
+
+\source{ICPSR}
+
+\references{
+  King, Gary, Michael Tomz and Jason Wittenberg. ICPSR Publication
+  Related Archive, 1225.
+  
+  King, Gary, Michael Tomz and Jason Wittenberg (2000).
+  ``Making the Most of Statistical Analyses: Improving Interpretation and 
+  Presentation,'' \emph{American Journal of Political Science}, vol. 44,
+  pp. 341-355.
+}
+
+\keyword{datasets}
+
+
+
+
diff --git a/man/mexico.Rd b/man/mexico.Rd
new file mode 100644
index 0000000..de1b387
--- /dev/null
+++ b/man/mexico.Rd
@@ -0,0 +1,28 @@
+\name{mexico}
+
+\alias{mexico}
+
+\title{Voting Data from the 1988 Mexican Presidental Election}
+
+\description{
+  This dataset contains voting data for the 1988 Mexican presidential
+  election.  
+}
+
+\usage{data(mexico)}
+
+\format{A table containing 33 variables and 1,359 observations.}
+
+\source{ICPSR}
+
+\references{
+  King, Gary, Michael Tomz and Jason Wittenberg (2000).
+  ``Making the Most of Statistical Analyses: Improving Interpretation and 
+  Presentation,'' \emph{American Journal of Political Science}, vol. 44,
+  pp. 341-355.
+
+  King, Tomz and Wittenberg.  ICPSR Publication Related Archive, 1255.  
+}
+
+\keyword{datasets}
+
diff --git a/man/mi.Rd b/man/mi.Rd
new file mode 100644
index 0000000..6fa19e1
--- /dev/null
+++ b/man/mi.Rd
@@ -0,0 +1,18 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{mi}
+\alias{mi}
+\title{Enables backwards compatibility for preparing non-Amelia imputed data sets
+for \code{zelig}.}
+\usage{
+mi(...)
+}
+\arguments{
+\item{...}{a set of \code{data.frame}'s}
+}
+\value{
+an \code{mi} object composed of a list of data frames.
+}
+\description{
+See \code{\link{to_zelig_mi}}
+}
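
A sketch of the backwards-compatible mi() bundling, mirroring the to_zelig_mi example shown for combine_coef_se earlier in this diff (the two fake "imputed" data frames are illustrative):

    library(Zelig)
    n <- 100
    data.1 <- data.frame(y = rnorm(n), x = runif(n))
    data.2 <- data.frame(y = rnorm(n), x = runif(n))
    imputed <- mi(data.1, data.2)  # bundle the imputed data sets
    z.out <- zelig(y ~ x, model = "ls", data = imputed)
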
diff --git a/man/mid.Rd b/man/mid.Rd
new file mode 100644
index 0000000..3799a24
--- /dev/null
+++ b/man/mid.Rd
@@ -0,0 +1,34 @@
+\name{mid}
+
+\alias{mid}
+
+\title{Militarized Interstate Disputes}
+
+\description{
+  A small sample from the militarized interstate disputes (MID) database.
+}
+
+\usage{data(mid)}
+
+\format{
+  A table containing 7 variables ("conflict", "major", "contig",
+  "power", "maxdem", "mindem", and "years") and 3,126 observations.  For
+  full variable descriptions, please see King and Zeng, 2001.  
+}
+
+\source{Militarized Interstate Disputes database}
+
+\references{
+  King, Gary, and Langche Zeng (2001).  ``Explaining Rare Events in
+  International Relations,'' \emph{International Organization}, vol. 55,
+  no. 3, pp. 693-715.  
+
+  Jones, Daniel M., Stuart A. Bremer and David Singer (1996).  ``Militarized
+  Interstate Disputes, 1816-1992: Rationale, Coding Rules, and Empirical
+  Patterns,'' \emph{Conflict Management and Peace Science}, vol. 15,
+  no. 2, pp. 163-213.  
+}
+
+\keyword{datasets}
+
+
diff --git a/man/model_lookup_df.Rd b/man/model_lookup_df.Rd
new file mode 100644
index 0000000..c9f7438
--- /dev/null
+++ b/man/model_lookup_df.Rd
@@ -0,0 +1,16 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/interface.R
+\docType{data}
+\name{model_lookup_df}
+\alias{model_lookup_df}
+\title{Instructions for how to convert non-Zelig fitted model objects to Zelig.
+Used in to_zelig}
+\format{An object of class \code{data.frame} with 9 rows and 4 columns.}
+\usage{
+model_lookup_df
+}
+\description{
+Instructions for how to convert non-Zelig fitted model objects to Zelig.
+Used in to_zelig
+}
+\keyword{datasets}
diff --git a/man/names-Zelig-method.Rd b/man/names-Zelig-method.Rd
new file mode 100644
index 0000000..3311683
--- /dev/null
+++ b/man/names-Zelig-method.Rd
@@ -0,0 +1,15 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-zelig.R
+\docType{methods}
+\name{names,Zelig-method}
+\alias{names,Zelig-method}
+\title{Names method for Zelig objects}
+\usage{
+\S4method{names}{Zelig}(x)
+}
+\arguments{
+\item{x}{An Object of Class Zelig}
+}
+\description{
+Names method for Zelig objects
+}
diff --git a/man/newpainters.Rd b/man/newpainters.Rd
new file mode 100644
index 0000000..0995248
--- /dev/null
+++ b/man/newpainters.Rd
@@ -0,0 +1,40 @@
+\name{newpainters}
+
+\alias{newpainters}
+
+\title{The Discretized Painter's Data of de Piles}
+
+\description{
+     The original painters data contain the subjective assessment, 
+     on a 0 to 20 integer scale, of 54 classical painters. The
+     newpainters data discretizes the subjective assessment by
+     quartiles with thresholds 25\%, 50\%, 75\%. The painters were 
+     assessed on four characteristics: composition, drawing, 
+     colour and expression.  The data is due to the Eighteenth century 
+     art critic, de Piles.
+
+}
+
+\usage{data(newpainters)}
+
+\format{A table containing 5 variables ("Composition", "Drawing", "Colour", 
+"Expression", and "School") and 54 observations.}
+
+\source{
+
+     A. J. Weekes (1986).``A Genstat Primer''. Edward Arnold.
+
+     M. Davenport and G. Studdert-Kennedy (1972). ``The statistical
+     analysis of aesthetic judgement: an exploration.'' \emph{Applied
+     Statistics}, vol. 21,  pp. 324--333.
+
+     I. T. Jolliffe (1986) ``Principal Component Analysis.'' Springer.
+}
+
+\references{
+
+     Venables, W. N. and Ripley, B. D. (2002) ``Modern Applied
+     Statistics with S,'' Fourth edition.  Springer.
+}
+
+\keyword{datasets}
diff --git a/man/or_summary.Rd b/man/or_summary.Rd
new file mode 100644
index 0000000..9008843
--- /dev/null
+++ b/man/or_summary.Rd
@@ -0,0 +1,22 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{or_summary}
+\alias{or_summary}
+\title{Find odds ratios for coefficients and standard errors
+for glm.summary class objects}
+\usage{
+or_summary(obj, label_mod_coef = "(OR)", label_mod_se = "(OR)")
+}
+\arguments{
+\item{obj}{a \code{glm.summary} class object}
+
+\item{label_mod_coef}{character string for how to modify the coefficient
+label.}
+
+\item{label_mod_se}{character string for how to modify the standard error
+label.}
+}
+\description{
+Find odds ratios for coefficients and standard errors
+for glm.summary class objects
+}
diff --git a/man/p_pull.Rd b/man/p_pull.Rd
new file mode 100644
index 0000000..13f14a8
--- /dev/null
+++ b/man/p_pull.Rd
@@ -0,0 +1,15 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{p_pull}
+\alias{p_pull}
+\title{Extract p-values from a fitted model object}
+\usage{
+p_pull(x)
+}
+\arguments{
+\item{x}{a fitted Zelig object}
+}
+\description{
+Extract p-values from a fitted model object
+}
+\keyword{internal}
diff --git a/man/plot-Zelig-ANY-method.Rd b/man/plot-Zelig-ANY-method.Rd
new file mode 100644
index 0000000..4e87396
--- /dev/null
+++ b/man/plot-Zelig-ANY-method.Rd
@@ -0,0 +1,19 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-zelig.R
+\docType{methods}
+\name{plot,Zelig,ANY-method}
+\alias{plot,Zelig,ANY-method}
+\title{Plot method for Zelig objects}
+\usage{
+\S4method{plot}{Zelig,ANY}(x, y, ...)
+}
+\arguments{
+\item{x}{An Object of Class Zelig}
+
+\item{y}{unused}
+
+\item{...}{Additional parameters to be passed to plot}
+}
+\description{
+Plot method for Zelig objects
+}
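
For completeness, the usual route to this method in the wrapper workflow, sketched with the turnout example used elsewhere in these pages:

    library(Zelig)
    data(turnout)
    z.out <- zelig(vote ~ race + educate, model = "logit", data = turnout)
    s.out <- sim(z.out, x = setx(z.out))
    plot(s.out)  # dispatches here and draws the quantity-of-interest panels
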
diff --git a/man/predict-Zelig-method.Rd b/man/predict-Zelig-method.Rd
new file mode 100644
index 0000000..f186f09
--- /dev/null
+++ b/man/predict-Zelig-method.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-zelig.R
+\docType{methods}
+\name{predict,Zelig-method}
+\alias{predict,Zelig-method}
+\title{Method for getting predicted values from Zelig objects}
+\usage{
+\S4method{predict}{Zelig}(object, ...)
+}
+\arguments{
+\item{object}{An Object of Class Zelig}
+
+\item{...}{Additional parameters to be passed to predict}
+}
+\description{
+Method for getting predicted values from Zelig objects
+}
diff --git a/man/pv.mlogit.Rd b/man/pv.mlogit.Rd
deleted file mode 100644
index 10558c4..0000000
--- a/man/pv.mlogit.Rd
+++ /dev/null
@@ -1,19 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/model-mlogit.R
-\name{pv.mlogit}
-\alias{pv.mlogit}
-\title{Simulate Predicted Values}
-\usage{
-pv.mlogit(fitted, ev)
-}
-\arguments{
-\item{fitted}{a fitted model object}
-
-\item{ev}{the simulated expected values}
-}
-\value{
-a vector of simulated values
-}
-\description{
-Simulate Predicted Values
-}
diff --git a/man/qi.plot.Rd b/man/qi.plot.Rd
new file mode 100644
index 0000000..8c82501
--- /dev/null
+++ b/man/qi.plot.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/plots.R
+\name{qi.plot}
+\alias{qi.plot}
+\title{Default Plot Design For Zelig Model QI's}
+\usage{
+qi.plot(obj, ...)
+}
+\arguments{
+\item{obj}{A reference class zelig5 object}
+
+\item{...}{Parameters to be passed to the `truehist' function which is
+implicitly called for numeric simulations}
+}
+\description{
+Default Plot Design For Zelig Model QI's
+}
+\author{
+James Honaker with panel layouts from Matt Owen
+}
diff --git a/man/qi_slimmer.Rd b/man/qi_slimmer.Rd
new file mode 100644
index 0000000..a487486
--- /dev/null
+++ b/man/qi_slimmer.Rd
@@ -0,0 +1,51 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/interface.R
+\name{qi_slimmer}
+\alias{qi_slimmer}
+\title{Find the median and a central interval of simulated quantity of interest
+distributions}
+\usage{
+qi_slimmer(df, qi_type = "ev", ci = 0.95)
+}
+\arguments{
+\item{df}{a tidy-formatted data frame of simulated quantities of interest
+created by \code{\link{zelig_qi_to_df}}.}
+
+\item{qi_type}{character string either \code{ev} or \code{pv} for returning the
+central intervals for the expected value or predicted value, respectively.}
+
+\item{ci}{numeric. The central interval to return, expressed on the
+\code{(0, 100]} or the equivalent \code{(0, 1]} interval.}
+}
+\description{
+Find the median and a central interval of simulated quantity of interest
+distributions
+}
+\details{
+A tidy-formatted data frame with the following columns:
+\itemize{
+\item The values fitted with \code{\link{setx}}
+\item \code{qi_ci_min}: the minimum value of the central interval specified with
+\code{ci}
+\item \code{qi_ci_median}: the median of the simulated quantity of interest
+distribution
+\item \code{qi_ci_max}: the maximum value of the central interval specified with
+\code{ci}
+}
+}
+\examples{
+library(dplyr)
+qi.central.interval <- zelig(Petal.Width ~ Petal.Length + Species,
+             data = iris, model = "ls") \%>\%
+             setx(Petal.Length = 2:4, Species = "setosa") \%>\%
+             sim() \%>\%
+             zelig_qi_to_df() \%>\%
+             qi_slimmer()
+
+}
+\seealso{
+\code{\link{zelig_qi_to_df}}
+}
+\author{
+Christopher Gandrud
+}
diff --git a/man/reduce.Rd b/man/reduce.Rd
new file mode 100644
index 0000000..0954fae
--- /dev/null
+++ b/man/reduce.Rd
@@ -0,0 +1,32 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{reduce}
+\alias{reduce}
+\title{Calculate the reduced dataset to be used in \code{\link{setx}}}
+\usage{
+reduce(dataset, s, formula, data, avg = avg)
+}
+\arguments{
+\item{dataset}{Zelig object data, possibly split to deal with \code{by}
+argument}
+
+\item{s}{list of variables and their tentative \code{setx} values}
+
+\item{formula}{a simplified version of the Zelig object formula (typically
+with 1 on the lhs)}
+
+\item{data}{Zelig object data}
+
+\item{avg}{function of data transformations}
+}
+\value{
+a list of all the model variables either at their central tendency or
+  their \code{setx} value
+}
+\description{
+This method is used internally.
+}
+\author{
+Christine Choirat and Christopher Gandrud
+}
+\keyword{internal}
diff --git a/man/relogit.Rd b/man/relogit.Rd
new file mode 100644
index 0000000..433515f
--- /dev/null
+++ b/man/relogit.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-relogit.R
+\name{relogit}
+\alias{relogit}
+\title{Estimation function for rare events logit models}
+\usage{
+relogit(formula, data = sys.parent(), tau = NULL, bias.correct = TRUE,
+  case.control = "prior", ...)
+}
+\description{
+Estimation function for rare events logit models
+}
+\details{
+This is intended as an internal function. Regular users should
+use \code{zelig} with \code{model = "relogit"}.
+}
+\keyword{internal}
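
As the details note, users reach this through zelig(); a sketch with the mid rare-events data documented earlier in this diff (the tau value, the assumed population share of disputes, is illustrative):

    library(Zelig)
    data(mid)
    z.out <- zelig(conflict ~ major + contig + power + maxdem + mindem + years,
                   model = "relogit", tau = 1042/303772, data = mid)
    summary(z.out)
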
diff --git a/man/residuals-Zelig-method.Rd b/man/residuals-Zelig-method.Rd
new file mode 100644
index 0000000..cf3d9c1
--- /dev/null
+++ b/man/residuals-Zelig-method.Rd
@@ -0,0 +1,15 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-zelig.R
+\docType{methods}
+\name{residuals,Zelig-method}
+\alias{residuals,Zelig-method}
+\title{Method for extracting residuals from Zelig objects}
+\usage{
+\S4method{residuals}{Zelig}(object)
+}
+\arguments{
+\item{object}{An Object of Class Zelig}
+}
+\description{
+Method for extracting residuals from Zelig objects
+}
diff --git a/man/rm_intercept.Rd b/man/rm_intercept.Rd
new file mode 100644
index 0000000..ce019fa
--- /dev/null
+++ b/man/rm_intercept.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{rm_intercept}
+\alias{rm_intercept}
+\title{Drop intercept columns or values from a data frame or named vector,
+  respectively}
+\usage{
+rm_intercept(x)
+}
+\arguments{
+\item{x}{a data frame or named vector}
+}
+\description{
+Drop intercept columns or values from a data frame or named vector,
+  respectively
+}
+\keyword{internal}
diff --git a/man/rocplot.Rd b/man/rocplot.Rd
new file mode 100644
index 0000000..2c08bf7
--- /dev/null
+++ b/man/rocplot.Rd
@@ -0,0 +1,68 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/plots.R
+\name{rocplot}
+\alias{rocplot}
+\title{Receiver Operator Characteristic Plots}
+\usage{
+rocplot(z1, z2,
+cutoff = seq(from=0, to=1, length=100), lty1="solid",
+lty2="dashed", lwd1=par("lwd"), lwd2=par("lwd"),
+col1=par("col"), col2=par("col"),
+main="ROC Curve",
+xlab = "Proportion of 1's Correctly Predicted",
+ylab="Proportion of 0's Correctly Predicted",
+plot = TRUE,
+...
+)
+}
+\arguments{
+\item{z1}{first model}
+
+\item{z2}{second model}
+
+\item{cutoff}{A vector of cut-off values between 0 and 1, at which to
+evaluate the proportion of 0s and 1s correctly predicted by the first and
+second model.  By default, this is 100 increments between 0 and 1
+inclusive}
+
+\item{lty1}{the line type of the first model (defaults to 'solid')}
+
+\item{lty2}{the line type of the second model (defaults to 'dashed')}
+
+\item{lwd1}{the line width of the first model (defaults to 1)}
+
+\item{lwd2}{the line width of the second model (defaults to 1)}
+
+\item{col1}{the color of the first model (defaults to 'black')}
+
+\item{col2}{the color of the second model (defaults to 'black')}
+
+\item{main}{a title for the plot (defaults to "ROC Curve")}
+
+\item{xlab}{a label for the X-axis}
+
+\item{ylab}{a label for the Y-axis}
+
+\item{plot}{whether to generate a plot to the selected device}
+
+\item{\dots}{additional parameters to be passed to the plot}
+}
+\value{
+if plot is TRUE, rocplot simply generates a plot. Otherwise, a list
+  with the following is produced:
+  \item{roc1}{a matrix containing a vector of x-coordinates and
+    y-coordinates corresponding to the number of ones and zeros correctly
+    predicted for the first model.}
+  \item{roc2}{a matrix containing a vector of x-coordinates and
+    y-coordinates corresponding to the number of ones and zeros correctly
+    predicted for the second model.}
+  \item{area1}{the area under the first ROC curve, calculated using
+    Riemann sums.}
+  \item{area2}{the area under the second ROC curve, calculated using
+    Riemann sums.}
+}
+\description{
+The 'rocplot' command generates a receiver operator characteristic plot to
+compare the in-sample (default) or out-of-sample fit for two logit or probit
+regressions.
+}
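
A sketch comparing two nested logit fits with rocplot, based on the turnout data used in the setx example below; the covariate split is arbitrary:

    library(Zelig)
    data(turnout)
    z.out1 <- zelig(vote ~ race + educate, model = "logit", data = turnout)
    z.out2 <- zelig(vote ~ race + educate + age, model = "logit", data = turnout)
    rocplot(z.out1, z.out2)  # overlays the two ROC curves
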
diff --git a/man/se_pull.Rd b/man/se_pull.Rd
new file mode 100644
index 0000000..cdafff4
--- /dev/null
+++ b/man/se_pull.Rd
@@ -0,0 +1,15 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{se_pull}
+\alias{se_pull}
+\title{Extract standard errors from a fitted model object}
+\usage{
+se_pull(x)
+}
+\arguments{
+\item{x}{a fitted Zelig object}
+}
+\description{
+Extract standard errors from a fitted model object
+}
+\keyword{internal}
diff --git a/man/seatshare.Rd b/man/seatshare.Rd
new file mode 100644
index 0000000..d869fed
--- /dev/null
+++ b/man/seatshare.Rd
@@ -0,0 +1,24 @@
+\name{seatshare}
+
+\alias{seatshare}
+
+\title{Left Party Seat Share in 11 OECD Countries}
+
+\description{
+  This data set contains time-series data of the seat shares in the lower legislative house of left-leaning parties over time, as well as the level of unemployment.  Data follows the style used in Hibbs (1977).}
+
+\usage{data(seatshare)}
+
+\format{A table containing 4 variables ("country","year","unemp","leftseat") and 384 observations split across 11 countries.}
+
+\source{OECD data and Mackie and Rose (1991), extended to further years.}
+
+\references{
+	Douglas A. Hibbs. (1977).  \emph{Political Parties and Macroeconomic Policy}. American Political Science Review 71(4):1467-1487.
+
+	Thomas T. Mackie and Richard Rose.  (1991).  \emph{The International Almanac of Electoral History}  Macmillan: London.
+}
+
+\keyword{datasets}
+
+
diff --git a/man/setfactor.Rd b/man/setfactor.Rd
new file mode 100644
index 0000000..dd79599
--- /dev/null
+++ b/man/setfactor.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{setfactor}
+\alias{setfactor}
+\title{Set new value of a factor variable, checking for existing levels}
+\usage{
+setfactor(fv, v)
+}
+\arguments{
+\item{fv}{factor variable}
+
+\item{v}{value}
+}
+\value{
+a factor variable with the value \code{v} and the same levels
+}
+\description{
+Set new value of a factor variable, checking for existing levels
+}
+\keyword{internal}
diff --git a/man/setval.Rd b/man/setval.Rd
new file mode 100644
index 0000000..995635d
--- /dev/null
+++ b/man/setval.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{setval}
+\alias{setval}
+\title{Set new value of a variable as appropriate to data type}
+\usage{
+setval(val, newval)
+}
+\arguments{
+\item{val}{old value}
+
+\item{newval}{new value}
+}
+\value{
+a variable of the same type with a value \code{val}
+}
+\description{
+Set new value of a variable as appropriate to data type
+}
+\keyword{internal}
diff --git a/man/setx.Rd b/man/setx.Rd
new file mode 100644
index 0000000..abba99a
--- /dev/null
+++ b/man/setx.Rd
@@ -0,0 +1,68 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/wrappers.R
+\name{setx}
+\alias{setx}
+\title{Setting Explanatory Variable Values}
+\usage{
+setx(obj, fn = NULL, data = NULL, cond = FALSE, ...)
+}
+\arguments{
+\item{obj}{output object from \code{\link{zelig}}}
+
+\item{fn}{a list of functions to apply to the data frame}
+
+\item{data}{a new data frame used to set the values of
+explanatory variables. If \code{data = NULL} (the default), the
+data frame called in \code{\link{zelig}} is used}
+
+\item{cond}{a logical value indicating whether unconditional
+(default) or conditional (choose \code{cond = TRUE}) prediction
+should be performed. If you choose \code{cond = TRUE}, \code{setx}
+will coerce \code{fn = NULL} and ignore the additional arguments in
+\code{\dots}. If \code{cond = TRUE} and \code{data = NULL},
+\code{setx} will prompt you for a data frame.}
+
+\item{...}{user-defined values of specific variables for overwriting the
+default values set by the function \code{fn}. For example, adding
+\code{var1 = mean(data\$var1)} or \code{x1 = 12} explicitly sets the value
+of \code{x1} to 12. In addition, you may specify one explanatory variable
+as a range of values, creating one observation for every unique value in
+the range of values}
+}
+\value{
+The output is returned in a field of the Zelig object. For
+  unconditional prediction, \code{x.out} is a model matrix based
+  on the specified values for the explanatory variables. For multiple
+  analyses (i.e., when choosing the \code{by} option in \code{\link{zelig}}),
+  \code{setx} returns the selected values calculated over the entire
+  data frame. If you wish to calculate values over just one subset of
+  the data frame, the 5th subset for example, you may use:
+  \code{x.out <- setx(z.out[[5]])}
+}
+\description{
+The \code{setx} function uses the variables identified in
+the \code{formula} generated by \code{zelig} and sets the values of
+the explanatory variables to the selected values. Use \code{setx}
+after \code{zelig} and before \code{sim} to simulate quantities of
+interest.
+}
+\details{
+This documentation describes the \code{setx} Zelig 4 compatibility wrapper
+function.
+}
+\examples{
+# Unconditional prediction:
+data(turnout)
+z.out <- zelig(vote ~ race + educate, model = 'logit', data = turnout)
+x.out <- setx(z.out)
+s.out <- sim(z.out, x = x.out)
+
+}
+\seealso{
+The full Zelig manual may be accessed online at
+  \url{http://docs.zeligproject.org/articles/}
+}
+\author{
+Matt Owen, Olivia Lau and Kosuke Imai
+}
+\keyword{file}
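Because the \dots argument above also accepts a range of values for one explanatory variable, a short continuation of the example is sketched here (educate is a variable in the turnout data used above):

# one scenario is created for every unique value of educate
x.range <- setx(z.out, educate = 10:16)
s.range <- sim(z.out, x = x.range)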
diff --git a/man/setx1.Rd b/man/setx1.Rd
new file mode 100644
index 0000000..724ea08
--- /dev/null
+++ b/man/setx1.Rd
@@ -0,0 +1,66 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/wrappers.R
+\name{setx1}
+\alias{setx1}
+\title{Setting Explanatory Variable Values for First Differences}
+\usage{
+setx1(obj, fn = NULL, data = NULL, cond = FALSE, ...)
+}
+\arguments{
+\item{obj}{output object from \code{\link{zelig}}}
+
+\item{fn}{a list of functions to apply to the data frame}
+
+\item{data}{a new data frame used to set the values of
+explanatory variables. If \code{data = NULL} (the default), the
+data frame called in \code{\link{zelig}} is used}
+
+\item{cond}{a logical value indicating whether unconditional
+(default) or conditional (choose \code{cond = TRUE}) prediction
+should be performed. If you choose \code{cond = TRUE}, \code{setx1}
+will coerce \code{fn = NULL} and ignore the additional arguments in
+\code{\dots}. If \code{cond = TRUE} and \code{data = NULL},
+\code{setx1} will prompt you for a data frame.}
+
+\item{...}{user-defined values of specific variables for overwriting the
+default values set by the function \code{fn}. For example, adding
+\code{var1 = mean(data\$var1)} or \code{x1 = 12} explicitly sets the value
+of \code{x1} to 12. In addition, you may specify one explanatory variable
+as a range of values, creating one observation for every unique value in
+the range of values}
+}
+\value{
+The output is returned in a field of the Zelig object. For
+  unconditional prediction, \code{x.out} is a model matrix based
+  on the specified values for the explanatory variables. For multiple
+  analyses (i.e., when choosing the \code{by} option in \code{\link{zelig}}),
+  \code{setx1} returns the selected values calculated over the entire
+  data frame. If you wish to calculate values over just one subset of
+  the data frame, the 5th subset for example, you may use:
+  \code{x.out <- setx(z.out[[5]])}
+}
+\description{
+This documentation describes the \code{setx1} Zelig 4 compatibility wrapper
+function. The wrapper is primarily useful for setting fitted values
+for creating first differences in piped workflows.
+}
+\examples{
+library(dplyr) # contains pipe operator \%>\%
+data(swiss)
+
+# plot first differences
+zelig(Fertility ~ Education, data = swiss, model = 'ls') \%>\%
+      setx(Education = 10) \%>\%
+      setx1(Education = 30) \%>\%
+      sim() \%>\%
+      plot()
+
+}
+\seealso{
+The full Zelig manual may be accessed online at
+  \url{http://docs.zeligproject.org/articles/}
+}
+\author{
+Christopher Gandrud, Matt Owen, Olivia Lau, Kosuke Imai
+}
+\keyword{file}
diff --git a/man/sim.Rd b/man/sim.Rd
new file mode 100644
index 0000000..b90a8f3
--- /dev/null
+++ b/man/sim.Rd
@@ -0,0 +1,112 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/wrappers.R
+\name{sim}
+\alias{sim}
+\title{Generic Method for Computing and Organizing Simulated Quantities of Interest}
+\usage{
+sim(obj, x, x1, y = NULL, num = 1000, bootstrap = F, bootfn = NULL,
+  cond.data = NULL, ...)
+}
+\arguments{
+\item{obj}{output object from \code{zelig}}
+
+\item{x}{values of explanatory variables used for simulation,
+generated by \code{setx}. Note: if omitted, \code{sim} will look for
+values in the reference class object}
+
+\item{x1}{optional values of explanatory variables (generated by a
+second call of \code{setx}), used for particular computations of
+quantities of interest, such as first differences}
+
+\item{y}{a parameter reserved for the computation of particular
+quantities of interest (average treatment effects). Few
+models currently support this parameter}
+
+\item{num}{an integer specifying the number of simulations to compute}
+
+\item{bootstrap}{currently unsupported}
+
+\item{bootfn}{currently unsupported}
+
+\item{cond.data}{currently unsupported}
+
+\item{...}{arguments reserved for future versions of Zelig}
+}
+\value{
+The output stored in \code{s.out} varies by model. Use the
+ \code{names} function to view the output stored in \code{s.out}.
+ Common elements include:
+ \item{x}{the \code{\link{setx}} values for the explanatory variables,
+   used to calculate the quantities of interest (expected values,
+   predicted values, etc.). }
+ \item{x1}{the optional \code{\link{setx}} object used to simulate
+   first differences, and other model-specific quantities of
+   interest, such as risk-ratios.}
+ \item{call}{the options selected for \code{\link{sim}}, used to
+   replicate quantities of interest. }
+ \item{zelig.call}{the original function and options for
+   \code{\link{zelig}}, used to replicate analyses. }
+ \item{num}{the number of simulations requested. }
+ \item{par}{the parameters (coefficients, and additional
+   model-specific parameters). You may wish to use the same set of
+   simulated parameters to calculate quantities of interest rather
+   than simulating another set.}
+ \item{qi\$ev}{simulations of the expected values given the
+   model and \code{x}. }
+ \item{qi\$pr}{simulations of the predicted values given by the
+   fitted values. }
+ \item{qi\$fd}{simulations of the first differences (or risk
+   difference for binary models) for the given \code{x} and \code{x1}.
+   The difference is calculated by subtracting the expected values
+   given \code{x} from the expected values given \code{x1}. (If you do not
+   specify \code{x1}, you will not get first differences or risk
+   ratios.) }
+ \item{qi\$rr}{simulations of the risk ratios for binary and
+   multinomial models. See specific models for details.}
+ \item{qi\$ate.ev}{simulations of the average expected
+   treatment effect for the treatment group, using conditional
+   prediction. Let \eqn{t_i} be a binary explanatory variable defining
+   the treatment (\eqn{t_i=1}) and control (\eqn{t_i=0}) groups. Then the
+   average expected treatment effect for the treatment group is
+   \deqn{ \frac{1}{n}\sum_{i=1}^n [ \, Y_i(t_i=1) -
+     E[Y_i(t_i=0)] \mid t_i=1 \,],}
+   where \eqn{Y_i(t_i=1)} is the value of the dependent variable for
+   observation \eqn{i} in the treatment group. Variation in the
+   simulations are due to uncertainty in simulating \eqn{E[Y_i(t_i=0)]},
+   the counterfactual expected value of \eqn{Y_i} for observations in the
+   treatment group, under the assumption that everything stays the
+   same except that the treatment indicator is switched to \eqn{t_i=0}. }
+ \item{qi\$ate.pr}{simulations of the average predicted
+   treatment effect for the treatment group, using conditional
+   prediction. Let \eqn{t_i} be a binary explanatory variable defining
+   the treatment (\eqn{t_i=1}) and control (\eqn{t_i=0}) groups. Then the
+   average predicted treatment effect for the treatment group is
+   \deqn{ \frac{1}{n}\sum_{i=1}^n [ \, Y_i(t_i=1) -
+     \widehat{Y_i(t_i=0)} \mid t_i=1 \,],}
+   where \eqn{Y_i(t_i=1)} is the value of the dependent variable for
+   observation \eqn{i} in the treatment group. Variation in the
+   simulations is due to uncertainty in simulating
+   \eqn{\widehat{Y_i(t_i=0)}}, the counterfactual predicted value of
+   \eqn{Y_i} for observations in the treatment group, under the
+   assumption that everything stays the same except that the
+   treatment indicator is switched to \eqn{t_i=0}.}
+}
+\description{
+Simulate quantities of interest from the estimated model
+output from \code{zelig()} given specified values of explanatory
+variables established in \code{setx()}. For classical \emph{maximum
+likelihood} models, \code{sim()} uses asymptotic normal
+approximation to the log-likelihood. For \emph{Bayesian models},
+Zelig simulates quantities of interest from the posterior density,
+whenever possible. For \emph{robust Bayesian models}, simulations
+are drawn from the identified class of Bayesian posteriors.
+Alternatively, you may generate quantities of interest using
+bootstrapped parameters.
+}
+\details{
+This documentation describes the \code{sim} Zelig 4 compatibility wrapper
+function.
+}
+\author{
+Christopher Gandrud, Matt Owen, Olivia Lau and Kosuke Imai
+}
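sim.Rd above carries no examples block, so a minimal sketch of the zelig / setx / sim workflow it describes is added here, using the bundled turnout data; supplying both x and x1 yields the first differences discussed in the value section above:

data(turnout)
z.out  <- zelig(vote ~ race + educate, model = "logit", data = turnout)
x.low  <- setx(z.out, educate = 8)     # baseline scenario
x.high <- setx(z.out, educate = 16)    # comparison scenario
s.out  <- sim(z.out, x = x.low, x1 = x.high, num = 1000)
summary(s.out)  # expected values, predicted values, and first differences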
diff --git a/man/simacf.Rd b/man/simacf.Rd
new file mode 100644
index 0000000..1a24d8c
--- /dev/null
+++ b/man/simacf.Rd
@@ -0,0 +1,12 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-arima.R
+\name{simacf}
+\alias{simacf}
+\title{Construct Autocorrelation Function from Zelig object and simulated parameters}
+\usage{
+simacf(coef, order, params, alpha = 0.5)
+}
+\description{
+Construct Autocorrelation Function from Zelig object and simulated parameters
+}
+\keyword{internal}
diff --git a/man/simulations.plot.Rd b/man/simulations.plot.Rd
new file mode 100644
index 0000000..ce9fd09
--- /dev/null
+++ b/man/simulations.plot.Rd
@@ -0,0 +1,43 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/plots.R
+\name{simulations.plot}
+\alias{simulations.plot}
+\title{Plot Quantities of Interest in a Zelig-fashion}
+\usage{
+simulations.plot(y, y1=NULL, xlab="", ylab="", main="", col=NULL, line.col=NULL,
+axisnames=TRUE)
+}
+\arguments{
+\item{y}{A matrix or vector of simulated results generated by Zelig, to be
+graphed.}
+
+\item{y1}{For comparison of two sets of simulated results at different
+choices of covariates, this should be an object of the same type and
+dimension as y.  If no comparison is to be made, this should be NULL.}
+
+\item{xlab}{Label for the x-axis.}
+
+\item{ylab}{Label for the y-axis.}
+
+\item{main}{Main plot title.}
+
+\item{col}{A vector of colors.  Colors will be used in turn as the graph is
+built for main plot objects. For nominal/categorical data, this color
+renders as the bar color, while for numeric data it renders as the background
+color.}
+
+\item{line.col}{A vector of colors.  Colors will be used in turn as the graph is
+built for line color shading of plot objects.}
+
+\item{axisnames}{a character-vector, specifying the names of the axes}
+}
+\value{
+nothing
+}
+\description{
+Various graph generation for different common types of simulated results from
+Zelig
+}
+\author{
+James Honaker
+}
diff --git a/man/sna.ex.Rd b/man/sna.ex.Rd
new file mode 100644
index 0000000..03e9210
--- /dev/null
+++ b/man/sna.ex.Rd
@@ -0,0 +1,20 @@
+\name{sna.ex}
+
+\alias{sna.ex}
+
+\title{Simulated Example of Social Network Data}
+
+\description{
+  This data set contains five sociomatrices of simulated social network data.}
+
+\usage{data(sna.ex)}
+
+\format{
+Each variable in the dataset is a 25 by 25 matrix of simulated social network data. The matrices are labeled "Var1", "Var2", "Var3", "Var4", and "Var5".
+}
+
+\source{fictitious}
+
+\keyword{datasets}
+
+
diff --git a/man/stat.Rd b/man/stat.Rd
new file mode 100644
index 0000000..a876e5a
--- /dev/null
+++ b/man/stat.Rd
@@ -0,0 +1,23 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{stat}
+\alias{stat}
+\title{Pass Quantities of Interest to Appropriate Summary Function}
+\usage{
+stat(qi, num)
+}
+\arguments{
+\item{qi}{quantity of interest (e.g., estimated value or predicted value)}
+
+\item{num}{number of simulations}
+}
+\value{
+a formatted qi
+}
+\description{
+Pass Quantities of Interest to Appropriate Summary Function
+}
+\author{
+Christine Choirat
+}
+\keyword{internal}
diff --git a/man/statlevel.Rd b/man/statlevel.Rd
new file mode 100644
index 0000000..a3c3943
--- /dev/null
+++ b/man/statlevel.Rd
@@ -0,0 +1,23 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{statlevel}
+\alias{statlevel}
+\title{Summarize Quantities of Interest in the Discrete Case}
+\usage{
+statlevel(qi, num)
+}
+\arguments{
+\item{qi}{quantity of interest in the discrete case}
+
+\item{num}{number of simulations}
+}
+\value{
+a formatted quantity of interest
+}
+\description{
+Summarize simulated quantities of interest in the discrete case
+}
+\author{
+Christine Choirat
+}
+\keyword{internal}
diff --git a/man/statmat.Rd b/man/statmat.Rd
new file mode 100644
index 0000000..5b89ecd
--- /dev/null
+++ b/man/statmat.Rd
@@ -0,0 +1,21 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{statmat}
+\alias{statmat}
+\title{Create QI summary matrix}
+\usage{
+statmat(qi)
+}
+\arguments{
+\item{qi}{quantity of interest in the discrete case}
+}
+\value{
+a formatted qi
+}
+\description{
+Create QI summary matrix
+}
+\author{
+Christine Choirat
+}
+\keyword{internal}
diff --git a/man/strip_package_name.Rd b/man/strip_package_name.Rd
new file mode 100644
index 0000000..d1a31fe
--- /dev/null
+++ b/man/strip_package_name.Rd
@@ -0,0 +1,15 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{strip_package_name}
+\alias{strip_package_name}
+\title{Remove package names from fitted model object calls.}
+\usage{
+strip_package_name(x)
+}
+\arguments{
+\item{x}{a fitted model object result}
+}
+\description{
+Enables \code{\link{from_zelig_model}} output to work with stargazer.
+}
+\keyword{internal}
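Since the description above says this internal helper exists so that from_zelig_model output works with stargazer, a sketch of that end-to-end use is added here; it assumes the stargazer package is installed (it is not a Zelig dependency):

z.out <- zelig(Fertility ~ Education, model = "ls", data = swiss)
# from_zelig_model() returns the underlying lm fit, which stargazer can print
stargazer::stargazer(from_zelig_model(z.out), type = "text")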
diff --git a/man/summary-Zelig-method.Rd b/man/summary-Zelig-method.Rd
new file mode 100644
index 0000000..1abcb6a
--- /dev/null
+++ b/man/summary-Zelig-method.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-zelig.R
+\docType{methods}
+\name{summary,Zelig-method}
+\alias{summary,Zelig-method}
+\title{Summary method for Zelig objects}
+\usage{
+\S4method{summary}{Zelig}(object, ...)
+}
+\arguments{
+\item{object}{An Object of Class Zelig}
+
+\item{...}{Additional parameters to be passed to summary}
+}
+\description{
+Summary method for Zelig objects
+}
diff --git a/man/summary.Arima.Rd b/man/summary.Arima.Rd
new file mode 100644
index 0000000..c60a95c
--- /dev/null
+++ b/man/summary.Arima.Rd
@@ -0,0 +1,19 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-timeseries.R
+\name{summary.Arima}
+\alias{summary.Arima}
+\title{Summary of an object of class Arima}
+\usage{
+\method{summary}{Arima}(object, ...)
+}
+\arguments{
+\item{object}{An object of class Arima}
+
+\item{...}{Additional parameters}
+}
+\value{
+The original object
+}
+\description{
+Summary of an object of class Arima
+}
diff --git a/man/swiss.Rd b/man/swiss.Rd
new file mode 100644
index 0000000..ab88b9a
--- /dev/null
+++ b/man/swiss.Rd
@@ -0,0 +1,51 @@
+\name{swiss}
+
+\alias{swiss}
+
+\title{Swiss Fertility and Socioeconomic Indicators (1888) Data}
+
+\description{
+   Standardized fertility measure and socio-economic indicators for
+     each of 47 French-speaking provinces of Switzerland at about 1888.
+}
+
+\usage{data(swiss)}
+
+\format{
+      A data frame with 47 observations on 6 variables, each of which
+      is in percent, i.e., in [0,100].
+      
+       [,1]  Fertility         Ig, "common standardized fertility measure"
+       [,2]  Agriculture       % of males involved in agriculture as occupation
+       [,3]  Examination       % "draftees" receiving highest mark on army examination
+       [,4]  Education         % education beyond primary school for "draftees".
+       [,5]  Catholic          % catholic (as opposed to "protestant").
+       [,6]  Infant.Mortality  live births who live less than 1 year.
+
+     All variables but 'Fertility' give proportions of the population.
+}
+
+\source{
+
+
+  Project "16P5", pages 549-551 in
+  
+  Mosteller, F. and Tukey, J. W. (1977) ``Data Analysis and
+  Regression: A Second Course in Statistics''. Addison-Wesley,
+  Reading Mass.
+
+  indicating their source as "Data used by permission of Franice van
+  de Walle. Office of Population Research, Princeton University,
+  1976.  Unpublished data assembled under NICHD contract number No
+  1-HD-O-2077."
+  }
+
+ \references{
+   Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) ``The New S
+   Language''. Wadsworth & Brooks/Cole.
+
+}
+
+
+\keyword{datasets}
diff --git a/man/table.levels.Rd b/man/table.levels.Rd
new file mode 100644
index 0000000..36e75b7
--- /dev/null
+++ b/man/table.levels.Rd
@@ -0,0 +1,30 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{table.levels}
+\alias{table.levels}
+\title{Create a table, but ensure that the correct
+columns exist. In particular, this allows for
+entries with zero as a value, which is not
+the default for standard tables}
+\usage{
+table.levels(x, levels, ...)
+}
+\arguments{
+\item{x}{a vector}
+
+\item{levels}{a vector of levels}
+
+\item{...}{parameters for table}
+}
+\value{
+a table
+}
+\description{
+Create a table, but ensure that the correct
+columns exist. In particular, this allows for
+entries with zero as a value, which is not
+the default for standard tables
+}
+\author{
+Matt Owen
+}
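A sketch of the behavior described above, assuming table.levels simply tabulates x over the supplied levels so that unobserved levels are kept with a count of zero:

x <- c("a", "a", "c")
table(x)                                    # level "b" is dropped entirely
table.levels(x, levels = c("a", "b", "c"))  # level "b" is kept with count 0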
diff --git a/man/to_zelig.Rd b/man/to_zelig.Rd
new file mode 100644
index 0000000..74060fa
--- /dev/null
+++ b/man/to_zelig.Rd
@@ -0,0 +1,28 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/interface.R
+\name{to_zelig}
+\alias{to_zelig}
+\title{Coerce a non-Zelig fitted model object to a Zelig class object}
+\usage{
+to_zelig(obj)
+}
+\arguments{
+\item{obj}{a model object fitted using \code{lm} or one of the many fitted
+using \code{glm}. Note: support for more model types is intended in future
+Zelig releases.}
+}
+\description{
+Coerce a non-Zelig fitted model object to a Zelig class object
+}
+\examples{
+library(dplyr)
+lm.out <- lm(Fertility ~ Education, data = swiss)
+
+z.out <- to_zelig(lm.out)
+
+# to_zelig called from within setx
+setx(z.out) \%>\% sim() \%>\% plot()
+
+}
+\author{
+Christopher Gandrud and Ista Zahn
+}
diff --git a/man/to_zelig_mi.Rd b/man/to_zelig_mi.Rd
new file mode 100644
index 0000000..0d7feb7
--- /dev/null
+++ b/man/to_zelig_mi.Rd
@@ -0,0 +1,41 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{to_zelig_mi}
+\alias{to_zelig_mi}
+\title{Bundle Multiply Imputed Data Sets into an Object for Zelig}
+\usage{
+to_zelig_mi(...)
+}
+\arguments{
+\item{...}{a set of \code{data.frame}'s or a single list of \code{data.frame}'s}
+}
+\value{
+an \code{mi} object composed of a list of data frames.
+}
+\description{
+This object prepares multiply imputed data sets so they can be used by
+  \code{zelig}.
+}
+\note{
+This function creates a list of \code{data.frame} objects, which
+  resembles the storage of imputed data sets in the \code{amelia} object.
+}
+\examples{
+# create datasets
+n <- 100
+x1 <- runif(n)
+x2 <- runif(n)
+y <- rnorm(n)
+data.1 <- data.frame(y = y, x = x1)
+data.2 <- data.frame(y = y, x = x2)
+
+# merge datasets into one object as if imputed datasets
+
+mi.out <- to_zelig_mi(data.1, data.2)
+
+# pass object in place of data argument
+z.out <- zelig(y ~ x, model = "ls", data = mi.out)
+}
+\author{
+Matt Owen, James Honaker, and Christopher Gandrud
+}
diff --git a/man/tobin.Rd b/man/tobin.Rd
new file mode 100644
index 0000000..606e263
--- /dev/null
+++ b/man/tobin.Rd
@@ -0,0 +1,29 @@
+\name{tobin}
+
+\alias{tobin}
+
+\title{Tobin's Tobit Data}
+
+\description{
+	Economists fit a parametric censored data model called the
+     `tobit'. These data are from Tobin's original paper.
+}
+
+\usage{data(tobin)}
+
+\format{
+     A data frame with 20 observations on the following 3 variables.
+
+     durable: Durable goods purchase
+
+     age: Age in years
+
+     quant: Liquidity ratio (x 1000)
+}
+
+\source{
+   J. Tobin, Estimation of relationships for limited dependent
+     variables, Econometrica, v26, 24-36, 1958.
+   }
+
+\keyword{datasets}
diff --git a/man/transformer.Rd b/man/transformer.Rd
new file mode 100644
index 0000000..620908c
--- /dev/null
+++ b/man/transformer.Rd
@@ -0,0 +1,31 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{transformer}
+\alias{transformer}
+\title{Conduct variable transformations called inside a \code{zelig} call}
+\usage{
+transformer(formula, data, FUN = "log", check, f_out, d_out)
+}
+\arguments{
+\item{formula}{model formulae}
+
+\item{data}{data frame used in \code{formula}}
+
+\item{FUN}{character string of the transformation function. Currently
+supports \code{factor} and \code{log}.}
+
+\item{check}{logical whether to just check if a formula contains an
+internally called transformation and return \code{TRUE} or \code{FALSE}}
+
+\item{f_out}{logical whether to return the converted formula}
+
+\item{d_out}{logical whether to return the converted data frame. Note:
+\code{f_out} must be missing}
+}
+\description{
+Conduct variable transformations called inside a \code{zelig} call
+}
+\author{
+Christopher Gandrud
+}
+\keyword{internal}
diff --git a/man/turnout.Rd b/man/turnout.Rd
new file mode 100644
index 0000000..3d9c30a
--- /dev/null
+++ b/man/turnout.Rd
@@ -0,0 +1,28 @@
+\name{turnout}
+
+\alias{turnout}
+
+\title{Turnout Data Set from the National Election Survey}
+
+\description{
+  This data set contains individual-level turnout data. It pools several
+  American National Election Surveys conducted during the 1992 presidential
+  election year.  Only the first 2,000 observations (from a total of 15,837 
+  observations) are included in the sample data.  
+}
+
+\usage{data(turnout)}
+
+\format{A table containing 5 variables ("race", "age", "educate", 
+"income", and "vote") and 2,000 observations.}
+
+\source{National Election Survey}
+
+\references{
+  King, Gary, Michael Tomz, Jason Wittenberg (2000).
+  ``Making the Most of Statistical Analyses: Improving Interpretation and 
+  Presentation,'' \emph{American Journal of Political Science}, vol. 44,
+  pp.341--355.
+}
+
+\keyword{datasets}
diff --git a/man/vcov-Zelig-method.Rd b/man/vcov-Zelig-method.Rd
new file mode 100644
index 0000000..2e05293
--- /dev/null
+++ b/man/vcov-Zelig-method.Rd
@@ -0,0 +1,15 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-zelig.R
+\docType{methods}
+\name{vcov,Zelig-method}
+\alias{vcov,Zelig-method}
+\title{Variance-covariance method for Zelig objects}
+\usage{
+\S4method{vcov}{Zelig}(object)
+}
+\arguments{
+\item{object}{An Object of Class Zelig}
+}
+\description{
+Variance-covariance method for Zelig objects
+}
diff --git a/man/vcov_gee.Rd b/man/vcov_gee.Rd
new file mode 100644
index 0000000..ed1718e
--- /dev/null
+++ b/man/vcov_gee.Rd
@@ -0,0 +1,14 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{vcov_gee}
+\alias{vcov_gee}
+\title{Find vcov for GEE models}
+\usage{
+vcov_gee(obj)
+}
+\arguments{
+\item{obj}{a \code{geeglm} class object.}
+}
+\description{
+Find vcov for GEE models
+}
diff --git a/man/vcov_rq.Rd b/man/vcov_rq.Rd
new file mode 100644
index 0000000..01381bd
--- /dev/null
+++ b/man/vcov_rq.Rd
@@ -0,0 +1,14 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{vcov_rq}
+\alias{vcov_rq}
+\title{Find vcov for quantile regression models}
+\usage{
+vcov_rq(obj)
+}
+\arguments{
+\item{obj}{a \code{rq} class object.}
+}
+\description{
+Find vcov for quantile regression models
+}
diff --git a/man/voteincome.Rd b/man/voteincome.Rd
new file mode 100644
index 0000000..0bb9e06
--- /dev/null
+++ b/man/voteincome.Rd
@@ -0,0 +1,27 @@
+\name{voteincome}
+\alias{voteincome}
+\docType{data}
+
+\title{Sample Turnout and Demographic Data from the 2000 Current Population Survey}
+\description{
+ This data set contains turnout and demographic data from a sample of respondents to the 2000 Current Population Survey (CPS). The states represented are South Carolina and Arkansas. The data represent only a sample and results from this example should not be used in publication.
+}
+\usage{data(voteincome)}
+\format{
+  A data frame containing 7 variables ("state", "year", "vote", "income", "education", "age", "female") and 1500 observations.
+  \describe{
+    \item{\code{state}}{a factor variable with levels equal to "AR" (Arkansas) and "SC" (South Carolina)}
+    \item{\code{year}}{an integer vector}
+    \item{\code{vote}}{an integer vector taking on values "1" (Voted) and "0" (Did Not Vote)}
+    \item{\code{income}}{an integer vector ranging from "4" (Less than \$5000) to "17" (Greater than \$75000) denoting family income. See the CPS codebook for more information on variable coding}
+    \item{\code{education}}{an  integer vector ranging from "1" (Less than High School Education) to "4" (More than a College Education). See the CPS codebook for more information on variable coding}
+    \item{\code{age}}{an integer vector ranging from "18" to "85"}
+    \item{\code{female}}{an integer vector taking on values "1" (Female) and "0" (Male)}
+}
+}
+
+\source{Census Bureau Current Population Survey}
+
+\references{\url{https://www.census.gov/programs-surveys/cps.html}}
+
+\keyword{datasets}
diff --git a/man/zelig.Rd b/man/zelig.Rd
new file mode 100644
index 0000000..7d17c91
--- /dev/null
+++ b/man/zelig.Rd
@@ -0,0 +1,85 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/wrappers.R
+\name{zelig}
+\alias{zelig}
+\title{Estimating a Statistical Model}
+\usage{
+zelig(formula, model, data, ..., by = NULL, cite = TRUE)
+}
+\arguments{
+\item{formula}{a symbolic representation of the model to be
+estimated, in the form \code{y \~\, x1 + x2}, where \code{y} is the
+dependent variable and \code{x1} and \code{x2} are the explanatory
+variables, and \code{y}, \code{x1}, and \code{x2} are contained in the
+same dataset. (You may include more than two explanatory variables,
+of course.) The \code{+} symbol means ``inclusion'' not
+``addition.'' You may also include interaction terms and main
+effects in the form \code{x1*x2} without computing them in prior
+steps; \code{I(x1*x2)} to include only the interaction term and
+exclude the main effects; and quadratic terms in the form
+\code{I(x1^2)}.}
+
+\item{model}{the name of a statistical model to estimate.
+For a list of other supported models and their documentation see:
+\url{http://docs.zeligproject.org/articles/}.}
+
+\item{data}{the name of a data frame containing the variables
+referenced in the formula or a list of multiply imputed data frames
+each having the same variable names and row numbers (created by
+\code{Amelia} or \code{\link{to_zelig_mi}}).}
+
+\item{...}{additional arguments passed to \code{zelig},
+relevant for the model to be estimated.}
+
+\item{by}{a factor variable contained in \code{data}. If supplied,
+\code{zelig} will subset
+the data frame based on the levels in the \code{by} variable, and
+estimate a model for each subset. This can save a considerable amount of
+effort. For example, to run the same model on all fifty states, you could
+use: \code{z.out <- zelig(y ~ x1 + x2, data = mydata, model = 'ls',
+by = 'state')} You may also use \code{by} to run models using MatchIt
+subclasses.}
+
+\item{cite}{If set to 'TRUE' (default), the model citation will be printed
+to the console.}
+}
+\value{
+Depending on the class of model selected, \code{zelig} will return
+  an object with elements including \code{coefficients}, \code{residuals},
+  and \code{formula} which may be summarized using
+  \code{summary(z.out)} or individually extracted using, for example,
+  \code{coef(z.out)}. See
+  \url{http://docs.zeligproject.org/articles/getters.html} for a list of
+  functions to extract model components. You can also extract whole fitted
+  model objects using \code{\link{from_zelig_model}}.
+}
+\description{
+The zelig function estimates a variety of statistical
+models. Use \code{zelig} output with \code{setx} and \code{sim} to compute
+quantities of interest, such as predicted probabilities, expected values, and
+first differences, along with the associated measures of uncertainty
+(standard errors and confidence intervals).
+}
+\details{
+This documentation describes the \code{zelig} Zelig 4 compatibility wrapper
+function.
+
+
+Additional parameters available to many models include:
+\itemize{
+  \item weights: vector of weight values or a name of a variable in the dataset
+  by which to weight the model. For more information see:
+  \url{http://docs.zeligproject.org/articles/weights.html}.
+  \item bootstrap: logical or numeric. If \code{FALSE} don't use bootstraps to
+  robustly estimate uncertainty around model parameters due to sampling error.
+  If an integer is supplied, the number of bootstraps to run.
+  For more information see:
+  \url{http://docs.zeligproject.org/articles/bootstraps.html}.
+}
+}
+\seealso{
+\url{http://docs.zeligproject.org/articles/}
+}
+\author{
+Matt Owen, Kosuke Imai, Olivia Lau, and Gary King
+}
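The weights and bootstrap arguments listed in the details above might be supplied as in the sketch below; mydata and its weighting variable w are placeholders, not objects shipped with the package:

z.out <- zelig(y ~ x1 + x2, model = "ls", data = mydata,
               weights = "w",     # weight by the variable named "w" in mydata
               bootstrap = 100)   # 100 bootstrap replicates for uncertainty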
diff --git a/man/zeligACFplot.Rd b/man/zeligACFplot.Rd
new file mode 100644
index 0000000..eb8339c
--- /dev/null
+++ b/man/zeligACFplot.Rd
@@ -0,0 +1,15 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/plots.R
+\name{zeligACFplot}
+\alias{zeligACFplot}
+\title{Plot Autocorrelation Function from Zelig QI object}
+\usage{
+zeligACFplot(z, omitzero = FALSE, barcol = "black", epsilon = 0.1,
+  col = NULL, main = "Autocorrelation Function", xlab = "Period",
+  ylab = "Correlation of Present Shock with Future Outcomes", ylim = NULL,
+  ...)
+}
+\description{
+Plot Autocorrelation Function from Zelig QI object
+}
+\keyword{internal}
diff --git a/man/zeligARMAbreakforecaster.Rd b/man/zeligARMAbreakforecaster.Rd
new file mode 100644
index 0000000..b99bb70
--- /dev/null
+++ b/man/zeligARMAbreakforecaster.Rd
@@ -0,0 +1,13 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-arima.R
+\name{zeligARMAbreakforecaster}
+\alias{zeligARMAbreakforecaster}
+\title{Construct Simulated Series with Internal Discontinuity in X}
+\usage{
+zeligARMAbreakforecaster(y.init = NULL, x, x1, simparam, order, sd, t1 = 5,
+  t2 = 10)
+}
+\description{
+Construct Simulated Series with Internal Discontinuity in X
+}
+\keyword{internal}
diff --git a/man/zeligARMAlongrun.Rd b/man/zeligARMAlongrun.Rd
new file mode 100644
index 0000000..d8c7d80
--- /dev/null
+++ b/man/zeligARMAlongrun.Rd
@@ -0,0 +1,13 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-arima.R
+\name{zeligARMAlongrun}
+\alias{zeligARMAlongrun}
+\title{Calculate the Long Run Equilibrium for Fixed X}
+\usage{
+zeligARMAlongrun(y.init = NULL, x, simparam, order, sd, tol = NULL,
+  burnin = 20)
+}
+\description{
+Calculate the Long Run Equilibrium for Fixed X
+}
+\keyword{internal}
diff --git a/man/zeligARMAnextstep.Rd b/man/zeligARMAnextstep.Rd
new file mode 100644
index 0000000..4b6549f
--- /dev/null
+++ b/man/zeligARMAnextstep.Rd
@@ -0,0 +1,13 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-arima.R
+\name{zeligARMAnextstep}
+\alias{zeligARMAnextstep}
+\title{Construct Simulated Next Step in Dynamic Series}
+\usage{
+zeligARMAnextstep(yseries = NULL, xseries, wseries = NULL, beta,
+  ar = NULL, i = NULL, ma = NULL, sd)
+}
+\description{
+Construct Simulated Next Step in Dynamic Series
+}
+\keyword{internal}
diff --git a/man/zeligArimaWrapper.Rd b/man/zeligArimaWrapper.Rd
new file mode 100644
index 0000000..cbb1062
--- /dev/null
+++ b/man/zeligArimaWrapper.Rd
@@ -0,0 +1,13 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/model-arima.R
+\name{zeligArimaWrapper}
+\alias{zeligArimaWrapper}
+\title{Estimation wrapper function for arima models, to easily fit with Zelig architecture}
+\usage{
+zeligArimaWrapper(formula, order = c(1, 0, 0), ..., include.mean = TRUE,
+  data)
+}
+\description{
+Estimation wrapper function for arima models, to easily fit with Zelig architecture
+}
+\keyword{internal}
diff --git a/man/zelig_mutate.Rd b/man/zelig_mutate.Rd
new file mode 100644
index 0000000..b1341af
--- /dev/null
+++ b/man/zelig_mutate.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utils.R
+\name{zelig_mutate}
+\alias{zelig_mutate}
+\title{Zelig Copy of plyr::mutate to avoid namespace conflict with dplyr}
+\source{
+Hadley Wickham (2011). The Split-Apply-Combine Strategy for Data
+Analysis. Journal of Statistical Software, 40(1), 1-29. URL
+\url{https://www.jstatsoft.org/v40/i01/}.
+}
+\usage{
+zelig_mutate(.data, ...)
+}
+\description{
+Zelig Copy of plyr::mutate to avoid namespace conflict with dplyr
+}
+\keyword{internal}
diff --git a/man/zelig_qi_to_df.Rd b/man/zelig_qi_to_df.Rd
new file mode 100644
index 0000000..60890df
--- /dev/null
+++ b/man/zelig_qi_to_df.Rd
@@ -0,0 +1,91 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/interface.R
+\name{zelig_qi_to_df}
+\alias{zelig_qi_to_df}
+\title{Extract simulated quantities of interest from a zelig object}
+\source{
+For a discussion of tidy data see
+\url{https://www.jstatsoft.org/article/view/v059i10}.
+}
+\usage{
+zelig_qi_to_df(obj)
+}
+\arguments{
+\item{obj}{a zelig object with simulated quantities of interest}
+}
+\description{
+Extract simulated quantities of interest from a zelig object
+}
+\details{
+Simulated quantities of interest are returned in a tidy data formatted
+\code{data.frame}. This can be useful for creating custom plots.
+
+Each row contains a simulated value and each column contains:
+\itemize{
+\item \code{setx_value} whether the simulations are from the base \code{x} \code{setx} or the
+contrasting \code{x1} for finding first differences.
+\item The fitted values specified in \code{setx} including a \code{by} column if
+\code{by} was used in the \code{\link{zelig}} call.
+\item \code{expected_value}
+\item \code{predicted_value}
+}
+
+For multinomial response models, a separate column is given for the expected
+probability of each outcome in the form \code{expected_*}. Additionally, there
+is a column of the predicted outcomes (\code{predicted_value}).
+}
+\examples{
+#### QIs without first difference or range, from covariates fitted at
+## central tendencies
+z.1 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+             model = "ls")
+z.1 <- setx(z.1)
+z.1 <- sim(z.1)
+head(zelig_qi_to_df(z.1))
+
+#### QIs for first differences
+z.2 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+             model = "ls")
+z.2a <- setx(z.2, Petal.Length = 2)
+z.2b <- setx(z.2, Petal.Length = 4.4)
+z.2 <- sim(z.2, x = z.2a, x1 = z.2b)
+head(zelig_qi_to_df(z.2))
+
+#### QIs for first differences, estimated by Species
+z.3 <- zelig(Petal.Width ~ Petal.Length, by = "Species", data = iris,
+             model = "ls")
+z.3a <- setx(z.3, Petal.Length = 2)
+z.3b <- setx(z.3, Petal.Length = 4.4)
+z.3 <- sim(z.3, x = z.3a, x1 = z.3b)
+head(zelig_qi_to_df(z.3))
+
+#### QIs for a range of fitted values
+z.4 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+             model = "ls")
+z.4 <- setx(z.4, Petal.Length = 2:4)
+z.4 <- sim(z.4)
+head(zelig_qi_to_df(z.4))
+
+#### QIs for a range of fitted values, estimated by Species
+z.5 <- zelig(Petal.Width ~ Petal.Length, by = "Species", data = iris,
+            model = "ls")
+z.5 <- setx(z.5, Petal.Length = 2:4)
+z.5 <- sim(z.5)
+head(zelig_qi_to_df(z.5))
+
+#### QIs for two ranges of fitted values
+z.6 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+            model = "ls")
+z.6a <- setx(z.6, Petal.Length = 2:4, Species = "setosa")
+z.6b <- setx(z.6, Petal.Length = 2:4, Species = "virginica")
+z.6 <- sim(z.6, x = z.6a, x1 = z.6b)
+
+head(zelig_qi_to_df(z.6))
+
+}
+\seealso{
+\code{\link{qi_slimmer}}
+}
+\author{
+Christopher Gandrud
+}
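Because the details above note that the tidy output is useful for custom plots, a small base-R sketch building on z.1 from the first example is added here; expected_value is the column name documented above:

qi_df <- zelig_qi_to_df(z.1)
# distribution of simulated expected values at the central-tendency scenario
hist(qi_df$expected_value,
     main = "Simulated expected values", xlab = "E[Petal.Width]")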
diff --git a/man/zelig_setx_to_df.Rd b/man/zelig_setx_to_df.Rd
new file mode 100644
index 0000000..5ad06ff
--- /dev/null
+++ b/man/zelig_setx_to_df.Rd
@@ -0,0 +1,64 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/interface.R
+\name{zelig_setx_to_df}
+\alias{zelig_setx_to_df}
+\title{Extract fitted values from a Zelig object with \code{setx} values}
+\usage{
+zelig_setx_to_df(obj)
+}
+\arguments{
+\item{obj}{a zelig object with simulated quantities of interest}
+}
+\description{
+Extract fitted values from a Zelig object with \code{setx} values
+}
+\details{
+Fitted (\code{setx}) values in a tidy data formatted
+\code{data.frame}. This was designed to enable the WhatIf package's
+\code{whatif} function to extract "counterfactuals".
+}
+\examples{
+#### QIs without first difference or range, from covariates fitted at
+## central tendencies
+z.1 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+             model = "ls")
+z.1 <- setx(z.1)
+zelig_setx_to_df(z.1)
+
+#### QIs for first differences
+z.2 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+             model = "ls")
+z.2 <- setx(z.2, Petal.Length = 2)
+z.2 <- setx1(z.2, Petal.Length = 4.4)
+zelig_setx_to_df(z.2)
+
+#### QIs for first differences, estimated by Species
+z.3 <- zelig(Petal.Width ~ Petal.Length, by = "Species", data = iris,
+             model = "ls")
+z.3 <- setx(z.3, Petal.Length = 2)
+z.3 <- setx1(z.3, Petal.Length = 4.4)
+zelig_setx_to_df(z.3)
+
+#### QIs for a range of fitted values
+z.4 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+             model = "ls")
+z.4 <- setx(z.4, Petal.Length = 2:4)
+zelig_setx_to_df(z.4)
+
+#### QIs for a range of fitted values, estimated by Species
+z.5 <- zelig(Petal.Width ~ Petal.Length, by = "Species", data = iris,
+             model = "ls")
+z.5 <- setx(z.5, Petal.Length = 2:4)
+zelig_setx_to_df(z.5)
+
+#### QIs for two ranges of fitted values
+z.6 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+             model = "ls")
+z.6 <- setx(z.6, Petal.Length = 2:4, Species = "setosa")
+z.6 <- setx1(z.6, Petal.Length = 2:4, Species = "virginica")
+zelig_setx_to_df(z.6)
+
+}
+\author{
+Christopher Gandrud
+}
diff --git a/tests/testthat.R b/tests/testthat.R
index a405d5e..c33fa9a 100755
--- a/tests/testthat.R
+++ b/tests/testthat.R
@@ -1,6 +1,8 @@
+library(AER)
+library(dplyr)
+library(geepack)
+library(survey)
 library(testthat)
-library(Zelig)
-library(ZeligChoice)
 
-set.seed("123")
-test_check("ZeligChoice")
\ No newline at end of file
+set.seed(123)
+test_check("Zelig")
diff --git a/tests/testthat/test-amelia.R b/tests/testthat/test-amelia.R
new file mode 100644
index 0000000..40f0fab
--- /dev/null
+++ b/tests/testthat/test-amelia.R
@@ -0,0 +1,23 @@
+# REQUIRE TEST for Amelia integration, no-transformations ----------------------
+
+test_that('REQUIRE TEST for Amelia integration, no-transformations', {
+    library(Amelia)
+
+    data(africa)
+    a.out <- amelia(x = africa, cs = "country", ts = "year", logs = "gdp_pc")
+    z.out <- zelig(gdp_pc ~ trade + civlib, model = "ls", data = a.out)
+    z.set <- setx(z.out)
+    z.sim <- sim(z.set)
+    expect_equal(mean(z.sim$get_qi()), 1000, tolerance = 100)
+})
+
+
+test_that('REQUIRE TEST for Amelia integration, log-transformation', {
+    library(Amelia)
+
+    data(africa)
+    a.out <- amelia(x = africa, cs = "country", ts = "year", logs = "gdp_pc")
+    z.out <- zelig(gdp_pc ~ trade + civlib, model = "ls", data = a.out)
+    z.outl <- zelig(gdp_pc ~ log(trade) + civlib, model = "ls", data = a.out)
+    expect_false(coef(z.out)[[1]][2] == coef(z.outl)[[1]][2])
+})
diff --git a/tests/testthat/test-arima.R b/tests/testthat/test-arima.R
new file mode 100644
index 0000000..d01ec4b
--- /dev/null
+++ b/tests/testthat/test-arima.R
@@ -0,0 +1,174 @@
+# REQUIRE TEST arima Monte Carlo -----------------------------------------------
+
+## Need to implement ##
+
+# REQUIRE TEST arima successful estimation -------------------------------------
+test_that('REQUIRE TEST arima successful estimation', {
+    data(seatshare)
+    ts <- zarima$new()
+
+    ## NEEDS a better test, possibly once get_coef has been implemented for arima
+    expect_error(
+    ts$zelig(unemp ~ leftseat, order = c(1,0,1), ts = "year", cs = "country",
+              data = seatshare),
+              NA)
+})
+
+# FAIL TEST arima fails if DV does not vary ------------------------------------
+test_that('FAIL TEST arima fails if DV does not vary', {
+    no_vary_df <- data.frame(country = c(rep("A", 5), rep("B", 5)),
+                             year = c(1:5, 1:5),
+                             y = c(rep(1:5), rep(2, 5)),
+                         x = c(1, 3, -1, NA, 1, NA, 1, 2, NA, 5))
+   # a.out <- amelia(x = no_vary_df, cs = "country", ts = "year")
+
+    zts <- zarima$new()
+    expect_error(
+        zts$zelig(y ~ x, ts = 'year', cs = 'country', order = c(1, 0, 1),
+            data = no_vary_df),
+            'Dependent variable does not vary for at least one of the cases.')
+})
+
+
+# REQUIRE TEST arima models ---------------------------------
+test_that('REQUIRE TEST arima models', {
+
+    n.obs <- 2000
+    x <- rnorm(n=n.obs)
+    z <- rnorm(n=n.obs)
+    t <- 1:n.obs
+    r <- rep(c(1,2),n.obs/2)
+    beta <- 1
+    phi <-0.3
+
+    y <- rep(NA,n.obs)
+    y[1]<-beta*x[1] + rnorm(1)
+    for(i in 2:n.obs){
+        y[i] <- phi*y[i-1] + beta*x[i] + rnorm(n=1, mean=0, sd=0.2)
+    }
+
+    mydata <- data.frame(y,x,z,t,r)
+    mydata2 <- rbind(mydata[10:n.obs,],mydata[1:9,])    # reorder dataset
+
+    # check ar model
+    zj <- zar$new()
+    zj$zelig(y~x + z , data=mydata, ts="t")
+    expect_equivalent(length(zj$get_coef()[[1]]), 4)
+
+    # check ma model
+    zj <- zma$new()
+    zj$zelig(y~x + z , data=mydata, ts="t")
+    expect_equivalent(length(zj$get_coef()[[1]]), 4)
+
+    # check ar-2, ma-1 model
+    zj <- zarima$new()
+    zj$zelig(y~x + z , order=c(2,0,1), data=mydata, ts="t")
+    expect_equivalent(length(zj$get_coef()[[1]]), 6)
+
+    # check integration
+    zj <- zarima$new()
+    zj$zelig(y~x + z , order=c(2,1,1), data=mydata, ts="t")
+    expect_equivalent(length(zj$get_coef()[[1]]), 5)
+
+    # check observations out of time order
+    zj <- zarima$new()
+    zj$zelig(y~x + z -1, order=c(2,0,1), data=mydata2, ts="t")
+    expect_equivalent(length(zj$get_coef()[[1]]), 5)
+
+    zj$setx()
+    zj$setx1(x=2)
+    zj$sim()
+
+    # ACF plot
+
+    myorder <- eval(zj$zelig.call$order)
+    mycoef <- coef(zj$zelig.out$z.out[[1]])
+    myparams <- zj$simparam$simparam[[1]]
+
+    test <- Zelig:::simacf(coef=mycoef, order=myorder, params=myparams, alpha = 0.5)
+
+    expect_true(is.null(zeligACFplot(test, omitzero=TRUE)))
+
+    # plots
+
+    expect_true(is.null(ci.plot(zj, qi="pvseries.shock")))
+    expect_true(is.null(ci.plot(zj, qi="pvseries.innovation")))
+    expect_true(is.null(plot(zj)))
+
+})
+
+# REQUIRE TEST ensure that the workflow can be completed using the
+# Zelig 5 wrappers
+test_that("REQUIRE TEST timeseries reference class wrappers", {
+    data(seatshare)
+    subset <- seatshare[seatshare$country == "UNITED KINGDOM",]
+    expect_error(ts.out <- zelig(unemp ~ leftseat, data = subset,
+                                 model = "arima", order = c(2, 0, 1)), NA)
+    expect_error(x.out <- setx(ts.out, leftseat = 0.75), NA)
+    expect_error(s.out <- sim(x.out), NA)
+    expect_error(s.out <- plot(s.out), NA)
+
+    expect_error(x.out <- setx1(x.out, leftseat = 0.25), NA)
+    expect_error(s.out <- sim(x.out), NA)
+    expect_error(s.out <- plot(s.out), NA)
+})
+
+# REQUIRE TEST to ensure that summary works with arima with sim ----------------
+test_that("REQUIRE TEST to ensure that summary works with arima with sim", {
+    data(seatshare)
+    subset <- seatshare[seatshare$country == "UNITED KINGDOM",]
+    s.out <- zelig(unemp ~ leftseat, data = subset, model = "arima",
+                   order = c(2,0,1)) %>%
+        setx(leftseat = 0.25) %>%
+        sim()
+    expect_error(summary(s.out), NA)
+})
+
+
+# FAILURE TEST cs ts by with timeseries ----------------------------------------
+test_that("FAILURE TEST cs ts by with timeseries", {
+    data(seatshare)
+    ts <- zarima$new()
+
+    expect_error(
+        ts$zelig(unemp ~ leftseat, order = c(1,0,1), ts = "year",
+                 cs = "country", by = "TEST",
+                 data = seatshare),
+        "cs and by are equivalent for this model. Only one needs to be specified."
+        )
+
+    expect_error(
+        ts$zelig(unemp ~ leftseat, order = c(1,0,1), cs = "country",
+                 data = seatshare),
+        "ts must be specified if cs is specified."
+    )
+})
+
+# REQUIRE TEST arima with differenced first-order autoregressive ---------------
+test_that("REQUIRE TEST arima with differenced first-order autoregressive", {
+    data(seatshare)
+    subset <- seatshare[seatshare$country == "UNITED KINGDOM",]
+
+    s.out <- zelig(unemp ~ leftseat, data = subset, model = "arima",
+                   order = c(1, 1, 0)) %>%
+        setx(leftseat = 0.25)
+    expect_error(sim(s.out), NA)
+})
+
+# FAIL TEST when data is not found (not exclusive to arima) --------------------
+test_that("FAIL TEST when data is not found (not exclusive to arima)", {
+    expect_error(zelig(formula = unemp ~ leftseat, model = "ma", ts = "year",
+                       data = subset),
+                 "data not found")
+})
+
+# REQUIRE TEST timeseries deprecation ------------------------------------------
+test_that("REQUIRE TEST timeseries deprecation", {
+    data(seatshare)
+    subset <- seatshare[seatshare$country == "UNITED KINGDOM",]
+    expect_warning(
+        ts.out <- zelig(formula = unemp ~ leftseat, order = c(1, 0, 0), ts = "year",
+                    data = subset, model = "arima"),
+        "All Zelig time series models are deprecated"
+    )
+})
diff --git a/tests/testthat/test-assertions.R b/tests/testthat/test-assertions.R
new file mode 100644
index 0000000..3c101a5
--- /dev/null
+++ b/tests/testthat/test-assertions.R
@@ -0,0 +1,61 @@
+# FAIL TESTS no Zelig model included -------------------------------------------
+test_that('FAIL TEST setx method error if missing Zelig model estimation', {
+    z5 <- zls$new()
+
+    expect_error(z5$setx(), 'Zelig model has not been estimated.')
+})
+
+test_that('FAIL TEST setrange method error if missing Zelig model estimation', {
+    z5 <- zls$new()
+
+    expect_error(z5$setrange(), 'Zelig model has not been estimated.')
+})
+
+test_that('FAIL TEST sim method error if missing Zelig model estimation', {
+    z5 <- zls$new()
+
+    expect_error(z5$sim(), 'Zelig model has not been estimated.')
+})
+
+test_that('FAIL TEST graph method error if missing Zelig model estimation', {
+    z5 <- zls$new()
+
+    expect_error(z5$graph(), 'Zelig model has not been estimated.')
+})
+
+# FAIL TEST insufficient inputs for sim ----------------------------------------
+test_that('FAIL TEST sim method error if missing Zelig model estimation', {
+    z5 <- zls$new()
+
+    expect_error(z5$sim(), 'Zelig model has not been estimated.')
+})
+
+# FAIL TEST length is not greater than 1 ---------------------------------------
+test_that('FAIL TEST length is not greater than 1', {
+    not_more_1 <- 1
+    expect_error(is_length_not_1(not_more_1), 'Length is 1.')
+})
+
+# FAIL TEST vector does not vary -----------------------------------------------
+test_that('FAIL TEST vector does not vary', {
+    expect_error(is_varying(c(rep(1, 5))), 'Vector does not vary.')
+})
+
+# REQUIRE TEST vector does not vary --------------------------------------------
+test_that('REQUIRE TEST vector does not vary', {
+    expect_true(is_varying(c(1, 2, 3), fail = FALSE))
+})
+
+# FAIL TEST is_simsx error message ---------------------------------------------
+test_that('FAIL TEST is_simsx error message', {
+    z <- zls$new()
+    expect_error(is_simsx(z$sim.out), 
+                 'Simulations for individual fitted values are not present.')
+})
+
+# FAIL TEST is_timeseries ------------------------------------------------------
+test_that('FAIL TEST is_timeseries', {
+    z <- zls$new()
+    expect_false(is_timeseries(z))
+    expect_error(is_timeseries(z, fail = TRUE), 'Not a timeseries object.')
+}) 
diff --git a/tests/testthat/test-bayesdiagnostics.R b/tests/testthat/test-bayesdiagnostics.R
new file mode 100644
index 0000000..613d83f
--- /dev/null
+++ b/tests/testthat/test-bayesdiagnostics.R
@@ -0,0 +1,32 @@
+# REQUIRE TEST Bayes Diagnostics ---------------------------------------------
+
+test_that('REQUIRE TEST Bayes Diagnostics', {
+    set.seed("123")
+    data(macro)
+    expect_error(zelig(unem ~ gdp + capmob + trade, model = "normal.bayes",
+                       bootstrap = 100, data = macro),
+                 "Error: The bootstrap is not available for Markov chain Monte Carlo (MCMC) models.", fixed=TRUE)
+    z <- zelig(unem ~ gdp + capmob + trade, model = "normal.bayes", data = macro, verbose = FALSE)
+    geweke.test <- z$geweke.diag()
+    heidel.test <- z$heidel.diag()
+    raftery.test <- z$raftery.diag()
+    expect_equivalent(length(geweke.test),2)
+    expect_equivalent(length(heidel.test),30)
+    expect_equivalent(length(raftery.test),2)
+})
+
+test_that('REQUIRE TEST Bayes Diagnostics for factors', {
+    set.seed("123")
+    data(swiss)
+    names(swiss) <- c("Fert", "Agr", "Exam", "Educ", "Cath", "InfMort")
+    z <- zelig(~ Agr + Exam + Educ + Cath + InfMort,
+               model = "factor.bayes", data = swiss,
+               factors = 2, verbose = FALSE,
+               a0 = 1, b0 = 0.15, burnin = 500, mcmc = 5000)
+    geweke.test <- z$geweke.diag()
+    heidel.test <- z$heidel.diag()
+    raftery.test <- z$raftery.diag()
+    expect_equivalent(length(geweke.test),2)
+    expect_equivalent(length(heidel.test),90)
+    expect_equivalent(length(raftery.test),2)
+})
diff --git a/tests/testthat/test-createJSON.R b/tests/testthat/test-createJSON.R
new file mode 100644
index 0000000..62f27e9
--- /dev/null
+++ b/tests/testthat/test-createJSON.R
@@ -0,0 +1,10 @@
+# REQUIRE TEST toJSON ---------------------------------------------
+
+test_that('REQUIRE TEST toJSON', {
+    j <- createJSON(movefile = FALSE)
+    expect_true(j)
+    mypath <- file.path("zelig5models.json")
+    expect_true(file.exists(mypath))
+    expect_true(validate(readChar(mypath, file.info(mypath)$size)))
+    file.remove(file.path(mypath))
+})
\ No newline at end of file
diff --git a/tests/testthat/test-exp.R b/tests/testthat/test-exp.R
new file mode 100644
index 0000000..af79968
--- /dev/null
+++ b/tests/testthat/test-exp.R
@@ -0,0 +1,19 @@
+# REQUIRE TEST Monte Carlo test exp ---------------------------------------------
+
+test_that('REQUIRE TEST exp Monte Carlo', {
+    set.seed(123)
+    z <- zexp$new()
+    test.exp <- z$mcunit(plot = FALSE)
+    expect_true(test.exp)
+})
+
+# REQUIRE TEST (minimal) documentation example -------------------------------------------
+
+test_that('REQUIRE TEST (minimal) documentation example', {
+    data(coalition)
+    z.out <- zelig(Surv(duration, ciep12) ~ fract + numst2, model = "exp",
+                   data = coalition)
+    x.low <- setx(z.out, numst2 = 0)
+    x.high <- setx(z.out, numst2 = 1)
+    expect_error(sim(z.out, x = x.low, x1 = x.high), NA)
+})
diff --git a/tests/testthat/test-gamma.R b/tests/testthat/test-gamma.R
new file mode 100644
index 0000000..878e081
--- /dev/null
+++ b/tests/testthat/test-gamma.R
@@ -0,0 +1,23 @@
+# REQUIRE TEST Monte Carlo test gamma ---------------------------------------------
+test_that('REQUIRE TEST gamma Monte Carlo', {
+    z <- zgamma$new()
+    test.gamma <- z$mcunit(b0 = 1, b1 = -0.6, alpha = 3, minx = 0, maxx = 1,
+                           nsim = 2000, ci = 0.99, plot = FALSE)
+    expect_true(test.gamma)
+})
+
+# REQUIRE TEST gamma example ---------------------------------------------------
+test_that('REQUIRE TEST gamma example', {
+    data(coalition)
+    z.out <- zelig(duration ~ fract + numst2, model = "gamma", data = coalition)
+    expect_error(plot(sim(setx(z.out))), NA)
+})
+
+# REQUIRE TEST gamma to_zelig --------------------------------------------------
+test_that('REQUIRE TEST gamma example', {
+    data(coalition)
+    m1 <- glm(duration ~ fract + numst2, family = Gamma(link="inverse"),
+                   data = coalition)
+    expect_message(setx(m1), 'Assuming zgamma to convert to Zelig.')
+    expect_error(plot(sim(setx(m1))), NA)
+})
diff --git a/tests/testthat/test-gammasurvey.R b/tests/testthat/test-gammasurvey.R
new file mode 100644
index 0000000..bd3e5e3
--- /dev/null
+++ b/tests/testthat/test-gammasurvey.R
@@ -0,0 +1,7 @@
+# REQUIRE TEST Monte Carlo test gammasurvey ---------------------------------------------
+
+test_that('REQUIRE TEST gammasurvey Monte Carlo', {
+    z <- zgammasurvey$new()
+    test.gammasurvey <- z$mcunit(b0=1, b1=-0.6, alpha=3, minx=0, maxx=1, nsim=2000, ci=.99, plot = FALSE)
+    expect_true(test.gammasurvey)
+})
\ No newline at end of file
diff --git a/tests/testthat/test-interface.R b/tests/testthat/test-interface.R
new file mode 100644
index 0000000..a83b560
--- /dev/null
+++ b/tests/testthat/test-interface.R
@@ -0,0 +1,111 @@
+# REQUIRE TEST from_zelig_model returns expected fitted model object -----------
+test_that('REQUIRE TEST from_zelig_model returns expected fitted model object', {
+    z5 <- zls$new()
+    z5$zelig(Fertility ~ Education, data = swiss)
+    expect_is(from_zelig_model(z5), class = 'lm')
+})
+
+
+# REQUIRE TEST zelig_qi_to_df setx, setrange, by --------------- ---------------
+test_that('REQUIRE TEST zelig_qi_to_df setx, setrange, by', {
+    #### QIs without first difference or range, from covariates fitted at
+    ## central tendencies
+    z.1 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+                 model = "ls")
+    z.1 <- setx(z.1)
+    expect_equal(names(zelig_setx_to_df(z.1)), c('Petal.Length', 'Species'))
+    z.1 <- sim(z.1)
+    expect_equal(nrow(zelig_qi_to_df(z.1)), 1000)
+
+    #### QIs for first differences
+    z.2 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+                 model = "ls")
+    z.2a <- setx(z.2, Petal.Length = 2)
+    z.2b <- setx(z.2, Petal.Length = 4.4)
+    z.2 <- sim(z.2, x = z.2a, x1 = z.2b)
+    z2_extracted <- zelig_qi_to_df(z.2)
+    expect_equal(nrow(z2_extracted), 2000)
+    expect_equal(names(z2_extracted), c("setx_value", "Petal.Length", "Species",
+                                        "expected_value", "predicted_value"))
+
+    #### QIs for first differences, estimated by Species
+    z.3 <- zelig(Petal.Width ~ Petal.Length, by = "Species", data = iris,
+                 model = "ls")
+    z.3a <- setx(z.3, Petal.Length = 2)
+    z.3b <- setx(z.3, Petal.Length = 4.4)
+    z.3 <- sim(z.3, x = z.3a, x1 = z.3b)
+    expect_equal(nrow(zelig_qi_to_df(z.3)), 6000)
+
+    #### QIs for a range of fitted values
+    z.4 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+                 model = "ls")
+    z.4 <- setx(z.4, Petal.Length = 2:4)
+    z.4 <- sim(z.4)
+    z4_extracted <- zelig_qi_to_df(z.4)
+    expect_equal(nrow(z4_extracted), 3000)
+    expect_is(z4_extracted, class = 'data.frame')
+
+    #### QIs for a range of fitted values, estimated by Species
+    z.5 <- zelig(Petal.Width ~ Petal.Length, by = "Species", data = iris,
+                model = "ls")
+    z.5 <- setx(z.5, Petal.Length = 2:4)
+    z.5 <- sim(z.5)
+    z5_extracted <- zelig_qi_to_df(z.5)
+    expect_equal(nrow(z5_extracted), 9000)
+    expect_equal(names(z5_extracted), c('setx_value', 'by', 'Petal.Length',
+                                        'expected_value', 'predicted_value'))
+
+    #### QIs for two ranges of fitted values
+    z.6 <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+                model = "ls")
+    z.6a <- setx(z.6, Petal.Length = 2:4, Species = 'setosa')
+    z.6b <- setx(z.6, Petal.Length = 2:4, Species = 'virginica')
+    expect_equal(nrow(zelig_setx_to_df(z.6b)), 3)
+    z.6 <- sim(z.6, x = z.6a, x1 = z.6b)
+
+    expect_equal(nrow(zelig_qi_to_df(z.6)), 6000)
+})
+
+# REQUIRE TEST zelig_qi_to_df multinomial outcome ------------------------------
+test_that('REQUIRE TEST zelig_qi_to_df multinomial outcome', {
+    library(dplyr)
+    set.seed(123)
+    data(mexico)
+    sims1_setx <- zelig(vote88 ~ pristr + othcok + othsocok,
+                        model = "mlogit.bayes", data = mexico,
+                        verbose = FALSE) %>%
+        setx() %>%
+        sim() %>%
+        zelig_qi_to_df()
+
+    sims1_setrange <- zelig(vote88 ~ pristr + othcok + othsocok,
+                            model = "mlogit.bayes", data = mexico,
+                            verbose = FALSE) %>%
+        setx(pristr = 1:3) %>%
+        sim() %>%
+        zelig_qi_to_df()
+
+    expected_col_names <- c("setx_value", "pristr", "othcok", "othsocok",
+                            "expected_P(Y=1)", "expected_P(Y=2)",
+                            "expected_P(Y=3)", "predicted_value")
+    expect_equal(names(sims1_setx), expected_col_names)
+    expect_equal(names(sims1_setrange), expected_col_names)
+
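+    # qi_slimmer() summarises the full simulation draws to one row per fitted
+    # value (e.g. the qi_ci_median column below and, for categorical outcomes,
+    # predicted proportions per level)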
+    slimmed_setx <- qi_slimmer(sims1_setx, qi_type = "expected_P(Y=2)")
+    expect_lt(slimmed_setx$qi_ci_median, 0.25)
+    slimmed_setrange <- qi_slimmer(sims1_setrange, qi_type = "predicted_value")
+    expected_sr_colnames <- c("setx_value", "pristr", "othcok", "othsocok",
+                              "predicted_proportion_(Y=1)",
+                              "predicted_proportion_(Y=2)",
+                              "predicted_proportion_(Y=3)")
+    expect_equal(names(slimmed_setrange), expected_sr_colnames)
+})
+
+# FAIL TEST to_zelig failure with unsupported model ----------------------------
+test_that('FAIL TEST to_zelig failure with unsupported model', {
+    x <- rnorm(100)
+    y <- rpois(100, exp(1 + x))
+    m1 <- glm(y ~ x, family = quasi(variance = "mu", link = "log"))
+    expect_error(setx(m1), "Not a Zelig object and not convertible to one.")
+    expect_error(setx(x), "Not a Zelig object and not convertible to one.")
+})
diff --git a/tests/testthat/test-ivreg.R b/tests/testthat/test-ivreg.R
new file mode 100644
index 0000000..7c3c017
--- /dev/null
+++ b/tests/testthat/test-ivreg.R
@@ -0,0 +1,69 @@
+# REQUIRE TEST ivreg Monte Carlo -----------------------------------------------
+#test_that("REQUIRE Test ivreg Monte Carlo", {
+#    z <- zivreg$new()
+#    test.ivreg <- z$mcunit(plot = FALSE)
+#    expect_true(test.ivreg)
+#})
+
+# REQUIRE TEST ivreg AER example with log transformations ----------------------
+test_that("REQUIRE TEST ivreg AER example with log transformations", {
+    library(AER)
+    # Example from AER (version 1.2-5) documentation
+    data("CigarettesSW")
+    CigarettesSW$rprice <- with(CigarettesSW, price/cpi)
+    CigarettesSW$rincome <- with(CigarettesSW, income/population/cpi)
+    CigarettesSW$tdiff <- with(CigarettesSW, (taxs - tax)/cpi)
+    CigarettesSW1995 <- subset(CigarettesSW, year == 1995)
+
+    # Unwrapped
+    fm <- ivreg(log(packs) ~ log(rprice) + log(rincome) |
+                log(rincome) + tdiff + I(tax/cpi),
+                data = CigarettesSW1995)
+
+    # Zelig wrapped
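+    # covariates are pre-logged in the data because log() inside the zelig call
+    # is not supported for ivreg models (see the FAIL TEST at the end of this file)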
+    CigarettesSW1995$log_rprice <- log(CigarettesSW1995$rprice)
+    CigarettesSW1995$log_rincome <- log(CigarettesSW1995$rincome)
+    ziv.out <- zelig(log(packs) ~ log_rprice + log_rincome |
+                    log_rincome + tdiff + I(tax/cpi),
+                    data = CigarettesSW1995,
+                    model = 'ivreg')
+    expect_equal(coef(fm)[[2]], coef(ziv.out)[[2]])
+    expect_equivalent(vcov(fm), vcov(ziv.out)[[1]])
+})
+
+# REQUIRE TEST ivreg setx and sim ----------------------------------------------
+test_that("REQUIRE TEST ivreg setx", {
+    data("CigarettesSW")
+    CigarettesSW$rprice <- with(CigarettesSW, price/cpi)
+    CigarettesSW$rincome <- with(CigarettesSW, income/population/cpi)
+    CigarettesSW$tdiff <- with(CigarettesSW, (taxs - tax)/cpi)
+    CigarettesSW1995 <- subset(CigarettesSW, year == 1995)
+
+    CigarettesSW1995$log_rprice <- log(CigarettesSW1995$rprice)
+    CigarettesSW1995$log_rincome <- log(CigarettesSW1995$rincome)
+
+    ziv.out <- zelig(log(packs) ~ log_rprice + log_rincome |
+                log_rincome + tdiff + I(tax/cpi),
+                data = CigarettesSW1995, model = 'ivreg')
+    ziv.set <- setx(ziv.out, log_rprice = log(95:118))
+    expect_equal(length(ziv.set$setx.out$range), 24)
+    expect_error(sim(ziv.set), NA)
+
+    expect_error(plot(sim(ziv.set)), NA)
+})
+
+# FAIL TEST ivreg with 2nd stage covariates logged in zelig call ---------------
+test_that("FAIL TEST ivreg with 2nd stage covariates logged in zelig call", {
+    data("CigarettesSW")
+    CigarettesSW$rprice <- with(CigarettesSW, price/cpi)
+    CigarettesSW$rincome <- with(CigarettesSW, income/population/cpi)
+    CigarettesSW$tdiff <- with(CigarettesSW, (taxs - tax)/cpi)
+    CigarettesSW1995 <- subset(CigarettesSW, year == 1995)
+
+    expect_error(
+    ziv.out <- zelig(log(packs) ~ log(rprice) + log(rincome) |
+                         log(rincome) + tdiff + I(tax/cpi),
+                     data = CigarettesSW1995, model = 'ivreg'),
+    "logging values in the zelig call is not currently supported for ivreg models."
+    )
+})
diff --git a/tests/testthat/test-logit.R b/tests/testthat/test-logit.R
new file mode 100755
index 0000000..135a493
--- /dev/null
+++ b/tests/testthat/test-logit.R
@@ -0,0 +1,29 @@
+# REQUIRE TEST Monte Carlo test logit ------------------------------------------
+test_that('REQUIRE TEST logit Monte Carlo', {
+    z <- zlogit$new()
+    test <- z$mcunit(minx = -2, maxx = 2, plot = FALSE)
+    expect_true(test)
+})
+
+# REQUIRE TEST logit example and show odds_ratios ------------------------------
+test_that('REQUIRE TEST logit example and show odds_ratios', {
+    data(turnout)
+    z.out1 <- zelig(vote ~ age + race, model = "logit", data = turnout,
+                    cite = FALSE)
+
+    betas <- coef(z.out1)
+    ors <- summary(z.out1, odds_ratios = TRUE)
+    ors <- ors$summ[[1]]$coefficients[1:3]
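+    # the odds-ratio summary should report exponentiated coefficient estimates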
+
+    expect_equal(exp(betas)[[1]], ors[1])
+})
+
+# REQUIRE TEST logit to_zelig ---------------------------------------------------
+test_that('REQUIRE TEST logit to_zelig', {
+    data(turnout)
+    m1 <- glm(vote ~ age + race, family = binomial(link="logit"),
+              data = turnout)
+    m1_sims <- sim(setx(m1))
+    expect_equal(sort(unique(zelig_qi_to_df(m1_sims)$predicted_value)), c(0, 1))
+
+})
diff --git a/tests/testthat/test-logitbayes.R b/tests/testthat/test-logitbayes.R
new file mode 100644
index 0000000..2fff8ab
--- /dev/null
+++ b/tests/testthat/test-logitbayes.R
@@ -0,0 +1,7 @@
+# REQUIRE TEST Monte Carlo test logitbayes -------------------------------------
+
+test_that('REQUIRE TEST logitbayes Monte Carlo', {
+    z <- zlogitbayes$new()
+    test.logitbayes <- z$mcunit(nsim = 2000, ci = 0.99, plot = FALSE)
+    expect_true(test.logitbayes)
+})
diff --git a/tests/testthat/test-logitsurvey.R b/tests/testthat/test-logitsurvey.R
new file mode 100644
index 0000000..a97bec9
--- /dev/null
+++ b/tests/testthat/test-logitsurvey.R
@@ -0,0 +1,7 @@
+# REQUIRE TEST Monte Carlo test logitsurvey ---------------------------------------------
+
+test_that('REQUIRE TEST logitsurvey Monte Carlo', {
+    z <- zlogitsurvey$new()
+    test.logitsurvey <- z$mcunit(plot = FALSE, ci=0.99)
+    expect_true(test.logitsurvey)
+})
\ No newline at end of file
diff --git a/tests/testthat/test-lognom.R b/tests/testthat/test-lognom.R
new file mode 100755
index 0000000..96bfba3
--- /dev/null
+++ b/tests/testthat/test-lognom.R
@@ -0,0 +1,6 @@
+# REQUIRE TEST Monte Carlo test lognorm ----------------------------------------
+test_that('REQUIRE TEST lognorm Monte Carlo', {
+    z <- zlognorm$new()
+    test.lognorm <- z$mcunit(minx = 0, ci = 0.99, nsim = 1000, plot = FALSE)
+    expect_true(test.lognorm)
+})
\ No newline at end of file
diff --git a/tests/testthat/test-ls.R b/tests/testthat/test-ls.R
new file mode 100755
index 0000000..04974c9
--- /dev/null
+++ b/tests/testthat/test-ls.R
@@ -0,0 +1,163 @@
+# REQUIRE TEST Monte Carlo test ls ---------------------------------------------
+
+test_that('REQUIRE TEST ls Monte Carlo', {
+    z <- zls$new()
+    test.ls <- z$mcunit(plot = FALSE)
+    expect_true(test.ls)
+})
+
+# REQUIRE TEST ls with continuous covar -----------------------------------------
+
+test_that('REQUIRE TEST ls continuous covar -- quickstart (Zelig 5 syntax)', {
+    z5 <- zls$new()
+    z5$zelig(Fertility ~ Education, data = swiss)
+
+    # extract education coefficient parameter estimate and compare to reference
+    expect_equivalent(round(as.numeric(z5$get_coef()[[1]][2]), 7), -0.8623503)
+})
+
+
+# REQUIRE TEST ls with by -------------------------------------------------------
+
+test_that('REQUIRE TEST ls with by', {
+    # Majority Catholic dummy
+    swiss$maj_catholic <- cut(swiss$Catholic, breaks = c(0, 51, 100))
+
+    z5by <- zls$new()
+    z5by$zelig(Fertility ~ Education, data = swiss, by = 'maj_catholic')
+    z5by$setx()
+    z5by$sim()
+    sims_df <- zelig_qi_to_df(z5by)
+    expect_equal(length(unique(sims_df$by)), 2)
+})
+
+# REQUIRE TEST gim method ------------------------------------------------------
+
+#test_that('REQUIRE TESTls gim method', {
+    #z5$gim()
+#})
+
+
+# REQUIRE TEST for sim with ls models including factor levels ------------------
+test_that('REQUIRE TEST for sim with models including factor levels', {
+    expect_is(iris$Species, 'factor')
+    z.out <- zelig(Petal.Width ~ Petal.Length + Species, data = iris,
+                   model = "ls")
+    x.out1 <- setx(z.out, Petal.Length = 1:10)
+    sims1 <- sim(z.out, x.out1)
+    expect_equal(length(sims1$sim.out$range), 10)
+
+    x.out2 <- setx(z.out, Petal.Length = 1:10, fn = list(numeric = Median))
+    sims2 <- sim(z.out, x.out2)
+    expect_equal(length(sims2$sim.out$range), 10)
+})
+
+# REQUIRE TEST for set with ls models including factors set within zelig call ----
+test_that('REQUIRE TEST for set with ls models including factors set within zelig call', {
+    data(macro)
+    z1 <- zelig(unem ~ gdp + trade + capmob + as.factor(country),
+             model = "ls", data = macro)
+    setUS1 <- setx(z1, country = "United States")
+
+    z2 <- zelig(unem ~ gdp + trade + capmob + factor(country,
+                                                    labels=letters[1:14]),
+                model = "ls", data = macro)
+    setUS2 <- setx(z2, country = "m")
+
+    macro$country <- as.factor(macro$country)
+    z3 <- zelig(unem ~ gdp + trade + capmob + country,
+                model = "ls", data = macro)
+    setUS3 <- setx(z3, country = "United States")
+
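+    # element 16 of the setx model matrix is the selected country dummy: it should
+    # equal 1 and be identical across the three equivalent factor specifications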
+    expect_equal(setUS1$setx.out$x$mm[[1]][[16]], 1)
+    expect_equal(setUS2$setx.out$x$mm[[1]][[16]], 1)
+    expect_equal(setUS1$setx.out$x$mm[[1]][[16]],
+                 setUS3$setx.out$x$mm[[1]][[16]])
+    expect_equal(setUS2$setx.out$x$mm[[1]][[16]],
+                 setUS3$setx.out$x$mm[[1]][[16]])
+})
+
+# REQUIRE TEST for set with ls models including natural logs set within zelig call --
+test_that('REQUIRE TEST for set with ls models including natural logs set within zelig call', {
+    z1 <- zelig(speed ~ log(dist), data = cars, model = 'ls')
+    setd1 <- setx(z1, dist = log(15))
+
+    cars$dist <- log(cars$dist)
+    z2 <- zelig(speed ~ dist, data = cars, model = 'ls')
+    setd2 <- setx(z2, dist = log(15))
+
+    expect_equal(round(setd1$setx.out$x$mm[[1]][[2]], digits = 5), 2.70805)
+    expect_equal(setd1$setx.out$x$mm[[1]][[2]],
+               setd2$setx.out$x$mm[[1]][[2]])
+
+    z3.1 <- zelig(Sepal.Length ~ log10(Petal.Length) + log(Sepal.Width),
+              model = 'ls', data = iris, cite = FALSE)
+    z3.2 <- zelig(Sepal.Length ~ log(Petal.Length, base = 10) +
+                      log(Sepal.Width),
+              model = 'ls', data = iris, cite = FALSE)
+    expect_equal(unname(coef(z3.1)), unname(coef(z3.2)))
+
+    setz3 <- setx(z3.1)
+#    expect_equal(as.vector(round(unlist(setz3$setx.out$x), digits = 2)),
+#                c(1, 1, 1.47, 1.12))
+})
+
+# REQUIRE TEST for ls with interactions ----------------------------------------
+test_that('REQUIRE TEST for ls with interactions', {
+    states <- as.data.frame(state.x77)
+    z <- zelig(Murder ~ Income * Population, data = states, model = 'ls')
+    s1 <- setx(z, Population = 1500:1600, Income = 3098)
+    s2 <- setx(z, Population = 1500:1600, Income = 6315)
+
+    expect_equal(length(s1$setx.out$range), 101)
+    expect_equal(length(s2$setx.out$range), 101)
+})
+
+# REQUIRE TEST for ls with unrecognised variable name --------------------------
+test_that('REQUIRE TEST for ls with unrecognised variable name', {
+  states <- as.data.frame(state.x77)
+  z <- zelig(Murder ~ Income * Population, data = states, model = 'ls')
+  expect_error(setx(z, population = 1500:1600, Income = 3098),
+               "Variable 'population' not in data set.")
+})
+
+# REQUIRE TEST for ls setrange with equal length ranges ------------------------
+test_that('REQUIRE TEST for ls setrange with equal length ranges and polynomials', {
+    iris.poly <- cbind(iris, I(iris$Petal.Length^2))
+    names(iris.poly)[ncol(iris.poly)] <- 'pl_2'
+    pl_range <- 1:7
+
+    # Polynomial found outside of formula
+    z.cars1 <- zelig(Sepal.Length ~ Petal.Length + pl_2 + Species,
+                      data = iris.poly, model = 'ls', cite = FALSE)
+    z.cars1 <- setx(z.cars1, Species = 'virginica', Petal.Length = pl_range,
+                   pl_2 = pl_range^2)
+    expect_equal(nrow(zelig_setx_to_df(z.cars1)), length(pl_range))
+
+    # Polynomial found in formula
+    z.cars2 <- zelig(Sepal.Length ~ Petal.Length + I(Petal.Length^2) + Species,
+                      data = iris, model = 'ls', cite = FALSE)
+    z.cars2 <- setx(z.cars2, Species = 'virginica', Petal.Length = pl_range)
+    expect_equal(nrow(zelig_setx_to_df(z.cars2)), length(pl_range))
+    expect_equal(zelig_setx_to_df(z.cars1)[[2]], zelig_setx_to_df(z.cars2)[[2]])
+})
+
+# REQUIRE TEST for . formulas --------------------------------------------------
+test_that('REQUIRE TEST for . formulas', {
+    z1 <- zelig(speed ~ ., data = cars, model = 'ls')
+    zset <- setx(z1, dist = 5)
+    expect_equal(names(coef(z1)), c("(Intercept)", "dist"))
+})
+
+# REQUIRE TEST for to_zelig within setx ----------------------------------------
+test_that('REQUIRE TEST for to_zelig within setx', {
+    m1 <- lm(speed ~ dist, data = cars)
+    zset <- setx(m1, dist = 5)
+    expect_equal(zset$setx.out$x$mm[[1]][2], 5)
+    plot(sim(zset))
+
+    m2 <- glm(speed ~ dist, data = cars, family = gaussian(link = "identity"))
+    zset <- setx(m2, dist = 5)
+    expect_equal(zset$setx.out$x$mm[[1]][2], 5)
+    plot(sim(zset))
+})
diff --git a/tests/testthat/test-matchit.R b/tests/testthat/test-matchit.R
new file mode 100644
index 0000000..0f7a2e6
--- /dev/null
+++ b/tests/testthat/test-matchit.R
@@ -0,0 +1,21 @@
+# REQUIRE TEST for matched data using MatchIt ----------------------------------
+
+#test_that('REQUIRE TEST for matched data using MatchIt', {
+#    library(MatchIt)
+#    library(optmatch)
+
+#    data(lalonde)
+#    m.out <- matchit(treat ~ educ + black + hispan + age, data = lalonde,
+#                     method = "optimal")
+
+#    z.out <- zelig(educ ~ treat + age, model = "ls", data = m.out)
+#    s.out <- setx(z.out)
+
+#    z.outl <- zelig(educ ~ treat + log(age), model = "ls", data = m.out)
+#    s.outl <- setx(z.outl)
+
+#    expect_false(s.out$setx.out$x$mm[[1]][3] == s.outl$setx.out$x$mm[[1]][3])
+#})
+
+
+# Not run due to unresolved environment issue
diff --git a/tests/testthat/test-mlogit.R b/tests/testthat/test-mlogit.R
deleted file mode 100644
index 34cbc66..0000000
--- a/tests/testthat/test-mlogit.R
+++ /dev/null
@@ -1,23 +0,0 @@
-# REQUIRE TEST mlogit example --------------------------------------------------
-
-test_that('REQUIRE TEST mlogit example', {
-    data(mexico)
-    z.out <- zelig(as.factor(vote88) ~ pristr + othcok + othsocok,
-                 model = "mlogit", data = mexico)
-    x.out <- setx(z.out)
-    expect_error(s.out <- sim(z.out, x.out), NA)
-})
-
-# REQUIRE TEST mlogit getters --------------------------------------------------
-test_that('REQUIRE TEST mlogit from_zelig_model', {
-  data(mexico)
-  z.out1 <- zelig(as.factor(vote88) ~ pristr + othcok + othsocok,
-                  model = "mlogit", data = mexico, cite = F)
-  
-  expect_equal(length(coef(z.out1)), 8)
-  expect_equal(class(from_zelig_model(z.out1))[[1]], "vglm")
-
-  expect_equal(length(z.out1$get_pvalue()[[1]]), 8)
-  expect_equal(length(z.out1$get_se()[[1]]), 8)
-  expect_false(any(z.out1$get_pvalue()[[1]] == z.out1$get_se()[[1]]))
-})
\ No newline at end of file
diff --git a/tests/testthat/test-negbin.R b/tests/testthat/test-negbin.R
new file mode 100644
index 0000000..21c99a3
--- /dev/null
+++ b/tests/testthat/test-negbin.R
@@ -0,0 +1,9 @@
+
+# REQUIRE TEST Monte Carlo test negbin ---------------------------------------------
+
+test_that('REQUIRE TEST negbin Monte Carlo', {
+    set.seed(123)
+    z <- znegbin$new()
+    test.negbin <- z$mcunit(plot=FALSE)
+    expect_true(test.negbin)
+})
diff --git a/tests/testthat/test-normal-gee.R b/tests/testthat/test-normal-gee.R
new file mode 100644
index 0000000..e770d4c
--- /dev/null
+++ b/tests/testthat/test-normal-gee.R
@@ -0,0 +1,51 @@
+# REQUIRE TEST normal.gee with . formula ---------------------------------------
+test_that('REQUIRE TEST normal.gee with . formula', {
+    # test initially created by @andreashandel
+    library(dplyr)
+
+    # make some fake cluster ID
+    mtcars$myid = sample(1:10, size = nrow(mtcars), replace = TRUE)
+
+    # sort by cluster ID
+    mydata <- mtcars %>% dplyr::arrange(myid)
+
+    m1 <- geepack::geeglm(formula = mpg ~ ., family = gaussian, data = mydata,
+                    id = mydata$myid) #this works
+
+    z1 <- zelig(formula = mpg ~ ., model = "normal.gee", id = "myid",
+                       data = mydata)
+
+    expect_equal(coef(m1), coef(z1))
+
+    z.set <- setx(z1)
+    z.sim <- sim(z.set)
+
+    expect_equal(nrow(zelig_qi_to_df(z.sim)), 1000)
+})
+
+# REQUIRE TEST normal.gee with multiply imputed data ---------------------------
+test_that('REQUIRE TEST normal.gee with . formula', {
+    # test initially created by @andreashandel
+    library(dplyr)
+
+    # make some fake cluster ID
+    mtcars$myid = sample(1:10, size = nrow(mtcars), replace = TRUE)
+
+    # sort by cluster ID
+    mydata1 <- mtcars %>% dplyr::arrange(myid) %>% as.data.frame
+    mydata2 = mydata1
+
+    # create MI data
+    mydata_mi <- to_zelig_mi(mydata1, mydata2)
+
+    zmi <- zelig(formula = mpg ~ cyl + disp, model = "normal.gee", id = "myid",
+                data = mydata_mi)
+
+    expect_error(summary(zmi), NA)
+
+    z.set <- setx(zmi)
+    z.sim <- sim(z.set)
+
+    expect_equal(nrow(zelig_qi_to_df(z.sim)), 1000)
+})
+
diff --git a/tests/testthat/test-normal.R b/tests/testthat/test-normal.R
new file mode 100644
index 0000000..f1682d7
--- /dev/null
+++ b/tests/testthat/test-normal.R
@@ -0,0 +1,8 @@
+# REQUIRE TEST Monte Carlo test normal ---------------------------------------------
+
+test_that('REQUIRE TEST normal Monte Carlo', {
+    set.seed(123)
+    z <- znormal$new()
+    test.normal <- z$mcunit(plot = FALSE)
+    expect_true(test.normal)
+})
diff --git a/tests/testthat/test-normalbayes.R b/tests/testthat/test-normalbayes.R
new file mode 100644
index 0000000..312c839
--- /dev/null
+++ b/tests/testthat/test-normalbayes.R
@@ -0,0 +1,8 @@
+# REQUIRE TEST Monte Carlo test normalbayes ---------------------------------------------
+
+test_that('REQUIRE TEST normalbayes Monte Carlo', {
+	set.seed(123)
+    z <- znormalbayes$new()
+    test.normalbayes <- z$mcunit(minx=-1, maxx = 1, ci=0.99, nsim=2000, plot = TRUE)
+    expect_true(test.normalbayes)
+})
\ No newline at end of file
diff --git a/tests/testthat/test-normalsurvey.R b/tests/testthat/test-normalsurvey.R
new file mode 100644
index 0000000..7d4ef6d
--- /dev/null
+++ b/tests/testthat/test-normalsurvey.R
@@ -0,0 +1,15 @@
+# REQUIRE TEST Monte Carlo test normalsurvey -----------------------------------
+test_that('REQUIRE TEST normalsurvey Monte Carlo', {
+    z <- znormalsurvey$new()
+    test.normalsurvey <- z$mcunit(plot = FALSE)
+    expect_true(test.normalsurvey)
+})
+
+# REQUIRE TEST to_zelig for normalsurvey ---------------------------------------
+test_that('REQUIRE TEST to_zelig for normalsurvey', {
+    data(api)
+    dstrat <- svydesign(id = ~1, strata = ~stype, weights = ~pw, data = apistrat,
+                      fpc = ~fpc)
+    m1 <- svyglm(api00 ~ ell + meals + mobility, design = dstrat)
+    expect_error(plot(sim(setx(m1))), NA)
+})
diff --git a/tests/testthat/test-ologit.R b/tests/testthat/test-ologit.R
deleted file mode 100755
index 5af61e7..0000000
--- a/tests/testthat/test-ologit.R
+++ /dev/null
@@ -1,20 +0,0 @@
-#### Ordered Logistic Regression Tests ####
-
-# REQUIRE TEST Monte Carlo ologit ----------------------------------------------
-test_that('REQUIRE TEST Monte Carlo ologit', {
-    z <- zologit$new()
-    test <- z$mcunit(minx = 0, maxx = 2, plot = FALSE)
-    expect_true(test)
-})
-
-# REQUIRE TEST ologit doc example ----------------------------------------------
-test_that('REQUIRE TEST ologit doc example', {
-    data(sanction)
-    sanction$ncost <- factor(sanction$ncost, ordered = TRUE,
-                         levels = c("net gain", "little effect", "modest loss",
-                                    "major loss"))
-    z.out <- zelig(ncost ~ mil + coop, model = "ologit", data = sanction)
-    x.out <- setx(z.out)
-    s.out <- sim(z.out, x = x.out)
-    expect_equal(names(s.out$sim.out[[1]]), c('ev', 'pv'))
-})
diff --git a/tests/testthat/test-oprobit.R b/tests/testthat/test-oprobit.R
deleted file mode 100755
index 49a0e3b..0000000
--- a/tests/testthat/test-oprobit.R
+++ /dev/null
@@ -1,3 +0,0 @@
-z <- zoprobit$new()
-test <- z$mcunit(minx=0, maxx=2, plot=FALSE)
-expect_true(test)
\ No newline at end of file
diff --git a/tests/testthat/test-plots.R b/tests/testthat/test-plots.R
new file mode 100644
index 0000000..87095f9
--- /dev/null
+++ b/tests/testthat/test-plots.R
@@ -0,0 +1,42 @@
+
+# FAIL TEST ci.plot if simrange is not supplied --------------------------------
+test_that('FAIL TEST ci.plot if simrange is not supplied', {
+    z <- zls$new()
+    z$zelig(Fertility ~ Education, data = swiss)
+
+    expect_error(ci.plot(z),
+                 'Simulations for a range of fitted values are not present.')
+})
+
+# FAIL TEST ci.plot first difference setrange and setrange1 same length --------
+test_that('FAIL TEST ci.plot first difference setrange and setrange1 same length', {
+    z <- zls$new()
+    z$zelig(Fertility ~ Education, data = swiss)
+    z$setrange(Education = 5:15)
+    z$setrange1(Education = 10:11)
+    z$sim()
+
+    expect_error(z$graph(), 'The two fitted data ranges are not the same length.')
+
+    # REQUIRE TEST for first difference over a range plots
+    z <- zls$new()
+    z$zelig(Fertility ~ Education, data = swiss)
+    z$setrange(Education = 5:15)
+    z$setrange1(Education = 15:25)
+    z$sim()
+    expect_error(z$graph(), NA)
+})
+
+# REQUIRE TEST ordered plots ---------------------------------------------
+
+test_that('REQUIRE TEST ordered plots', {
+    data(sanction)
+    sanction$ncost <- factor(sanction$ncost, ordered = TRUE, levels = c("net gain", "little effect", "modest loss", "major loss"))
+    z.out <- zoprobitbayes$new()
+    z.out$zelig(ncost ~ mil + coop, data = sanction, verbose = FALSE)
+    z.out$setx(mil=0)
+    z.out$setx1(mil=1)
+    z.out$sim()
+    expect_true(is.null(plot(z.out)))
+})
+
diff --git a/tests/testthat/test-poisson.R b/tests/testthat/test-poisson.R
new file mode 100644
index 0000000..c41838d
--- /dev/null
+++ b/tests/testthat/test-poisson.R
@@ -0,0 +1,32 @@
+# REQUIRE TEST Monte Carlo poisson ---------------------------------------------
+test_that('REQUIRE TEST Monte Carlo poisson', {
+    set.seed("123")
+    z <- zpoisson$new()
+    test.poisson <- z$mcunit(minx = 0, plot = FALSE)
+    expect_true(test.poisson)
+})
+
+# REQUIRE TEST poisson example -------------------------------------------------
+test_that('REQUIRE TEST poisson example', {
+    data(sanction)
+    z.out <- zelig(num ~ target + coop, model = "poisson", data = sanction)
+    x.out <- setx(z.out)
+    s.out <- sim(z.out, x = x.out)
+    expect_error(s.out$graph(), NA)
+})
+
+# REQUIRE TEST poisson get_pvalue -------------------------------------------------
+test_that('REQUIRE TEST poisson get_pvalue', {
+  data(sanction)
+  z.out <- zelig(num ~ target + coop, model = "poisson", data = sanction)
+  expect_error(z.out$get_pvalue(), NA)
+})
+
+# REQUIRE TEST poisson to_zelig -------------------------------------------------
+test_that('REQUIRE TEST poisson to_zelig', {
+    data(sanction)
+    m1 <- glm(num ~ target + coop, family = poisson("log"),
+              data = sanction)
+    zset <- setx(m1, target = 2)
+    expect_equal(zset$setx.out$x$mm[[1]][2], 2)
+})
diff --git a/tests/testthat/test-poissonbayes.R b/tests/testthat/test-poissonbayes.R
new file mode 100644
index 0000000..d0a1644
--- /dev/null
+++ b/tests/testthat/test-poissonbayes.R
@@ -0,0 +1,7 @@
+# REQUIRE TEST Monte Carlo test poissonbayes ---------------------------------------------
+
+test_that('REQUIRE TEST poissonbayes Monte Carlo', {
+    z <- zpoissonbayes$new()
+    test.poissonbayes <- z$mcunit(minx=1, nsim = 2000, ci=0.99, plot = FALSE)
+    expect_true(test.poissonbayes)
+})
\ No newline at end of file
diff --git a/tests/testthat/test-poissonsurvey.R b/tests/testthat/test-poissonsurvey.R
new file mode 100644
index 0000000..cee71f4
--- /dev/null
+++ b/tests/testthat/test-poissonsurvey.R
@@ -0,0 +1,8 @@
+# REQUIRE TEST Monte Carlo test poissonsurvey ---------------------------------------------
+
+test_that('REQUIRE TEST poissonsurvey Monte Carlo', {
+    set.seed("123")
+    z <- zpoissonsurvey$new()
+    test.poissonsurvey <- z$mcunit(plot = FALSE)
+    expect_true(test.poissonsurvey)
+})
\ No newline at end of file
diff --git a/tests/testthat/test-probit.R b/tests/testthat/test-probit.R
new file mode 100644
index 0000000..a5c9c5f
--- /dev/null
+++ b/tests/testthat/test-probit.R
@@ -0,0 +1,26 @@
+# REQUIRE TEST probit mc -------------------------------------------------------
+test_that("REQUIRE TEST probit mc", {
+    z <- zprobit$new()
+    test.probit <- z$mcunit(plot = FALSE)
+    expect_true(test.probit)
+})
+
+# REQUIRE TEST probit example --------------------------------------------------
+test_that("REQUIRE TEST probit example", {
+    data(turnout)
+    z.out <- zelig(vote ~ race + educate, model = "probit", data = turnout)
+    x.out <- setx(z.out)
+    s.out <- sim(z.out, x = x.out)
+    expect_equal(sort(unique(zelig_qi_to_df(s.out)$predicted_value)), c(0, 1))
+})
+
+# REQUIRE TEST probit to_zelig ------------------------------------------------
+test_that('REQUIRE TEST probit to_zelig', {
+    data(turnout)
+    m1 <- glm(vote ~ race + educate, family = binomial("probit"),
+              data = turnout)
+    m1.out <- setx(m1)
+    m1.out <- sim(m1.out)
+    expect_equal(sort(unique(zelig_qi_to_df(m1.out)$predicted_value)), c(0, 1))
+    expect_error(plot(sim(setx(m1))), NA)
+})
diff --git a/tests/testthat/test-probitbayes.R b/tests/testthat/test-probitbayes.R
new file mode 100644
index 0000000..19017b4
--- /dev/null
+++ b/tests/testthat/test-probitbayes.R
@@ -0,0 +1,7 @@
+# REQUIRE TEST Monte Carlo test probitbayes ---------------------------------------------
+
+test_that('REQUIRE TEST probitbayes Monte Carlo', {
+    z <- zprobitbayes$new()
+    test.probitbayes <- z$mcunit(minx=-1, maxx = 1, ci=0.99, nsim=2000, plot = FALSE)
+    expect_true(test.probitbayes)
+})
\ No newline at end of file
diff --git a/tests/testthat/test-probitsurvey.R b/tests/testthat/test-probitsurvey.R
new file mode 100644
index 0000000..d8a1211
--- /dev/null
+++ b/tests/testthat/test-probitsurvey.R
@@ -0,0 +1,7 @@
+# REQUIRE TEST Monte Carlo test probitsurvey ---------------------------------------------
+
+test_that('REQUIRE TEST probitsurvey Monte Carlo', {
+    z <- zprobitsurvey$new()
+    test.probitsurvey <- z$mcunit(minx = -1, maxx = 1, plot = FALSE)
+    expect_true(test.probitsurvey)
+})
\ No newline at end of file
diff --git a/tests/testthat/test-qislimmer.R b/tests/testthat/test-qislimmer.R
new file mode 100644
index 0000000..e7ae108
--- /dev/null
+++ b/tests/testthat/test-qislimmer.R
@@ -0,0 +1,33 @@
+# REQUIRE TEST for qi_slimmer --------------------------------------------------
+test_that('REQUIRE TEST for qi_slimmer', {
+  qi.full.interval <- zelig(Petal.Width ~ Petal.Length + Species,
+                            data = iris, model = "ls") %>%
+    setx(Petal.Length = 2:4, Species = "setosa") %>%
+    sim() %>%
+    zelig_qi_to_df()
+
+  expect_equal(nrow(qi_slimmer(qi.full.interval)), 3)
+  expect_equal(nrow(qi_slimmer(qi.full.interval, qi_type = 'pv')), 3)
+  expect_equal(nrow(qi_slimmer(qi.full.interval, ci = 90)), 3)
+})
+
+# FAIL TEST for qi_slimmer --------------------------------------------------
+test_that('FAIL TEST for qi_slimmer', {
+  qi.full.interval <- zelig(Petal.Width ~ Petal.Length + Species,
+                            data = iris, model = "ls") %>%
+    setx(Petal.Length = 2:4, Species = "setosa") %>%
+    sim() %>%
+    zelig_qi_to_df()
+
+    expect_error(qi_slimmer(qi.full.interval, qi_type = 'TEST'))
+    expect_error(qi_slimmer(qi.full.interval, ci = 900),
+                '900 will not produce a valid central interval.')
+
+    z <- zelig(Petal.Width ~ Petal.Length + Species, data = iris, model = "ls")
+    expect_error(qi_slimmer(z),
+                'df must be a data frame created by zelig_qi_to_df.')
+    df_test <- data.frame(a = 1, b = 2)
+    expect_error(qi_slimmer(df_test),
+                 'The data frame does not appear to have been created by zelig_qi_to_df.')
+})
+
diff --git a/tests/testthat/test-quantile.R b/tests/testthat/test-quantile.R
new file mode 100644
index 0000000..5eddc6e
--- /dev/null
+++ b/tests/testthat/test-quantile.R
@@ -0,0 +1,44 @@
+# REQUIRE TEST quantile regression doc example ---------------------------------
+test_that("REQUIRE TEST quantile regression doc example", {
+    library(quantreg)
+    library(dplyr)
+    data("stackloss")
+
+    z.out1 <- zelig(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,
+                    model = 'rq', data = stackloss)
+
+    z.out2 <- zelig(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,
+                   model = 'rq', data = stackloss, tau = 0.5)
+    z.set2 <- setx(z.out2, Air.Flow = seq(50, 80, by = 10))
+    z.sim2 <- sim(z.set2)
+    expect_error(plot(z.sim2), NA)
+
+    z.out3 <- zelig(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,
+                    model = 'rq', data = stackloss, tau = 0.25)
+    z.set3 <- setx(z.out3, Air.Flow = seq(50, 80, by = 10))
+    z.sim3 <- sim(z.set3)
+    expect_error(plot(z.sim3), NA)
+
+    expect_equivalent(coef(z.out1)[[1]], coef(z.out2)[[1]])
+
+    qr.out1 <- rq(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,
+                  data = stackloss, tau = 0.5)
+    expect_equivalent(coef(z.out1)[[2]], coef(qr.out1)[[2]])
+
+    expect_error(zelig(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,
+                    model = 'rq', data = stackloss, tau = c(0.25, 0.75)),
+                 'tau argument only accepts 1 value.\nZelig is using only the first value.')
+})
+
+# REQUIRE TEST quantile regression with Amelia imputed data --------------------
+test_that('REQUIRE TEST quantile regression with Amelia imputed data',{
+    library(Amelia)
+    library(dplyr)
+
+    data(africa)
+    a.out <- amelia(x = africa, cs = "country", ts = "year", logs = "gdp_pc")
+    z.out <- zelig(gdp_pc ~ trade + civlib, model = "rq", data = a.out)
+
+    expect_error(z.out %>% setx %>% sim %>% plot, NA)
+})
+
diff --git a/tests/testthat/test-relogit.R b/tests/testthat/test-relogit.R
new file mode 100755
index 0000000..c2dd55b
--- /dev/null
+++ b/tests/testthat/test-relogit.R
@@ -0,0 +1,121 @@
+# REQUIRE TEST Monte Carlo test relogit ----------------------------------------
+
+test_that('REQUIRE TEST relogit Monte Carlo', {
+    z <- zrelogit$new()
+    test.relogit <- z$mcunit(alpha = 0.1, b0 = -4, nsim = 1000, plot = FALSE)
+    expect_true(test.relogit)
+})
+
+
+# REQUIRE TEST relogit vignette example ------------------------------------------------
+
+test_that('REQUIRE TEST relogit vignette example', {
+    data(mid)
+    z.out1 <- zelig(conflict ~ major + contig + power + maxdem + mindem + years,
+                    data = mid, model = "relogit", tau = 1042/303772)
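+    # tau = 1042/303772 is the population proportion of 1s (conflicts) in the
+    # response, as required by the rare events logit correction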
+    x.out1 <- setx(z.out1)
+    s.out1 <- sim(z.out1, x = x.out1)
+    sims <- zelig_qi_to_df(s.out1)
+
+    expect_lt(mean(sims$predicted_value), 0.1)
+})
+
+# REQUIRE TEST relogit vignette logs transformation ----------------------------
+
+test_that('REQUIRE TEST relogit vignette logs transformation', {
+    data(mid)
+    z.out1 <- zelig(conflict ~ major + contig + power + maxdem + mindem + years,
+                    data = mid, model = "relogit", tau = 1042/303772)
+
+    z.outlog <- zelig(conflict ~ major + contig + log(power) + maxdem + mindem +
+                        years,
+                    data = mid, model = "relogit", tau = 1042/303772)
+    x.outlog <- setx(z.outlog, power = log(0.5))
+
+    expect_false(coef(x.outlog)['power'] == coef(z.out1)['power'])
+})
+
+# FAIL TEST relogit with tau <= 0 ----------------------------------------------
+test_that('FAIL TEST relogit with tau <= 0', {
+    data(mid)
+    expect_error(zelig(conflict ~ major + contig + power + maxdem + mindem +
+                           years,
+                    data = mid, model = "relogit", tau = -0.1),
+                 "tau is the population proportion of 1's for the response variable.\nIt must be > 0.")
+})
+
+# REQUIRE TEST relogit with tau range ------------------------------------------
+test_that('REQUIRE TEST relogit with tau range', {
+    data(mid)
+    expect_error(z.out <- zelig(conflict ~ major + contig + power + maxdem +
+                                    mindem + years,
+                    data = mid, model = "relogit", tau = c(0.002, 0.005)),
+                 "tau must be a vector of length less than or equal to 1. For multiple taus, estimate models individually.")
+})
+
+# REQUIRE TEST relogit works with predict --------------------------------------
+test_that("REQUIRE TEST relogit works with predict", {
+    data(mid)
+    x <- zelig(conflict ~ major, data = mid, model = "relogit",
+               tau = 1042/303772)
+    x <- from_zelig_model(x)
+    expect_warning(predict(x, newdata = mid[1, ]), NA)
+})
+
+# REQUIRE TEST relogit follows ISQ (2001, eq. 11) ------------------------------
+test_that("REQUIRE TEST relogit follows ISQ (2001, eq. 11)", {
+    data(mid)
+    z.out1 <- zelig(conflict ~ major + contig + power + maxdem + mindem + years,
+                    data = mid, model = "relogit", tau = 1042/303772,
+                    cite = FALSE, case.control = "weighting")
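+    # with case.control = "weighting" the summary should report robust standard errors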
+    expect_equal(round(coef(z.out1)[[2]], 6), 1.672177)
+    expect_equal(colnames(summary(z.out1)$coefficients)[2],
+                     "Std. Error (robust)")
+
+    vcov_z.out1 <- vcov(z.out1)
+    z.out.vcov_not_robust <- z.out1
+    z.out.vcov_not_robust$robust.se <- FALSE
+    expect_false(round(vcov_z.out1[[1]][1]) ==
+                     round(vcov(z.out.vcov_not_robust)[[1]][1]))
+
+    # Not adequately tested !!!
+    z.out1 %>% setx() %>% sim() %>% plot()
+    z.out.vcov_not_robust %>% setx() %>% sim() %>% plot()
+})
+
+# REQUIRE TEST Odds Ratio summary ----------------------------------------------
+test_that('REQUIRE TEST Odds Ratio summary', {
+    data(mid)
+    z.out1 <- zelig(conflict ~ major + contig + power + maxdem + mindem + years,
+                    data = mid, model = "relogit", tau = 1042/303772,
+                    cite = FALSE, case.control = "weighting")
+
+    sum_weighting <- summary(z.out1, odds_ratios = FALSE)
+    sum_or_weighting <- summary(z.out1, odds_ratios = TRUE)
+    expect_false(sum_weighting$coefficients[1, 1] ==
+                     sum_or_weighting$coefficients[1, 1])
+    expect_equal(colnames(sum_or_weighting$coefficients)[2],
+                 "Std. Error (OR, robust)")
+
+    z.out2 <- zelig(conflict ~ major + contig + power + maxdem + mindem + years,
+                    data = mid, model = "relogit", tau = 1042/303772,
+                    cite = FALSE, case.control = "prior")
+
+    sum_weighting2 <- summary(z.out2, odds_ratios = FALSE)
+    sum_or_weighting2 <- summary(z.out2, odds_ratios = TRUE)
+    expect_equal(colnames(sum_or_weighting2$coefficients)[2],
+                 "Std. Error (OR)")
+})
+
+# REQUIRE TEST get_predict takes type = "response" ----------------------------
+test_that('REQUIRE TEST get_predict takes type = "response"', {
+    data(mid)
+    z.out1 <- zelig(conflict ~ major + contig + power + maxdem + mindem + years,
+                    data = mid, model = "relogit", tau = 1042/303772)
+
+    prob1 <- z.out1$get_predict(type = "response")
+    expect_gt(min(sapply(prob1, min)), 0)
+
+    prob2 <- predict(z.out1, type = "response")
+    expect_gt(min(sapply(prob2, min)), 0)
+})
diff --git a/tests/testthat/test-survey.R b/tests/testthat/test-survey.R
new file mode 100644
index 0000000..8100c47
--- /dev/null
+++ b/tests/testthat/test-survey.R
@@ -0,0 +1,56 @@
+# REQUIRE TEST survey weights correctly passed  --------------------------------
+
+test_that('REQUIRE TEST survey weights correctly passed', {
+    data(api, package = "survey")
+
+    z.out1 <- zelig(api00 ~ meals + yr.rnd, model = "normal.survey",
+                    id = ~dnum, weights = 'pw', data = apiclus1, fpc = ~fpc)
+
+    z.out2 <- zelig(api00 ~ meals + yr.rnd,
+                    model = "normal.survey",
+                    id = ~dnum, weights = ~pw, data = apiclus1, fpc = ~fpc)
+
+    z.out3 <- zelig(api00 ~ meals + yr.rnd, model = "normal.survey",
+                    id = ~dnum, weights = apiclus1$pw, data = apiclus1,
+                    fpc = ~fpc)
+
+    api_design <- svydesign(id = ~dnum, weights = ~pw, data = apiclus1,
+                            fpc = ~fpc )
+    model_glm <- svyglm(api00 ~ meals + yr.rnd, api_design,
+                        family = gaussian("identity"))
+
+    expect_equal(coef(z.out1), coef(z.out2))
+    expect_equal(coef(z.out1), coef(z.out3))
+    expect_equal(coef(z.out1), coef(model_glm))
+})
+
+# REQUIRE TEST survey glm with no weights  -------------------------------------
+
+test_that('REQUIRE TEST survey glm with no weights', {
+    data(api, package = "survey")
+
+    z.out1_no_weights <- zelig(api00 ~ meals + yr.rnd, model = "normal.survey",
+                    id = ~dnum, data = apiclus1, fpc = ~fpc)
+
+    api_design_no_weights <- svydesign(id = ~dnum, data = apiclus1, fpc = ~fpc,
+                                       weights = ~pw )
+    model_glm_no_weights <- svyglm(api00 ~ meals + yr.rnd,
+                                   api_design_no_weights,
+                                   family = gaussian("identity"))
+
+    expect_equal(coef(z.out1_no_weights), coef(model_glm_no_weights))
+})
+
+
+# REQUIRE TEST repweights ------------------------------------------------------
+test_that('REQUIRE TEST repweights', {
+    ### ----- NEED TO THINK OF A BETTER TEST ------ ##
+    data(scd, package = "survey")
+
+    BRRrep <- 2*cbind(c(1,0,1,0,1,0), c(1,0,0,1,0,1),
+                      c(0,1,1,0,0,1), c(0,1,0,1,1,0))
+
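+    # BRRrep supplies balanced repeated replication (BRR) replicate weights for scd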
+    z.outREP <- zelig(alive ~ arrests , model = "normal.survey",
+                    repweights = BRRrep, type = "BRR",
+                    data = scd, na.action = NULL)
+})
diff --git a/tests/testthat/test-tobit.R b/tests/testthat/test-tobit.R
new file mode 100644
index 0000000..2f7a40b
--- /dev/null
+++ b/tests/testthat/test-tobit.R
@@ -0,0 +1,22 @@
+# REQUIRE TEST Monte Carlo test tobit ---------------------------------------------
+
+test_that('REQUIRE TEST tobit Monte Carlo', {
+    z <- ztobit$new()
+    test.tobit <- z$mcunit(minx = 0, plot = FALSE)
+    expect_true(test.tobit)
+})
+
+# REQUIRE TEST update tobit formula --------------------------------------------
+test_that('REQUIRE TEST update tobit formula', {
+    data(tobin)
+    z5<-ztobit$new()
+    z5$zelig(durable ~ age + quant, data = tobin)
+
+    z5.1_coefs <- coef(z5)
+
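+    # update(controls, durable ~ age + .) expands to durable ~ age + quant,
+    # so refitting should reproduce the original coefficients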
+    controls <- ~ quant
+    z5$zelig(formula = update(controls, durable ~ age + .), data = tobin)
+
+    expect_equal(z5.1_coefs, coef(z5))
+})
+
diff --git a/tests/testthat/test-tobitbayes.R b/tests/testthat/test-tobitbayes.R
new file mode 100644
index 0000000..c4f298d
--- /dev/null
+++ b/tests/testthat/test-tobitbayes.R
@@ -0,0 +1,7 @@
+# REQUIRE TEST Monte Carlo test tobitbayes ---------------------------------------------
+
+test_that('REQUIRE TEST tobitbayes Monte Carlo', {
+    z <- ztobitbayes$new()
+    test.tobitbayes <- z$mcunit(nsim=2000, ci=0.99, minx=0, plot = FALSE)
+    expect_true(test.tobitbayes)
+})
\ No newline at end of file
diff --git a/tests/testthat/test-utils.R b/tests/testthat/test-utils.R
new file mode 100644
index 0000000..ebe6f80
--- /dev/null
+++ b/tests/testthat/test-utils.R
@@ -0,0 +1,84 @@
+context('test-utils.R')
+
+test_that("REQUIRE TEST Median()", {
+    input <- c(1, 2, 3)
+    expected <- 2
+    actual <- Median(input)
+    expect_equal(actual, expected)
+})
+
+# REQUIRE TEST for to_zelig_mi -------------------------------------------------
+# test_that('REQUIRE TEST for to_zelig_mi', {
+#     set.seed(123)
+#     n <- 100
+#     x1 <- runif(n)
+#     x2 <- runif(n)
+#     y <- rnorm(n)
+#     data.1 <- data.frame(y = y, x = x1)
+#     data.2 <- data.frame(y = y, x = x2)
+
+#     mi.out <- to_zelig_mi(data.1, data.2)
+#     z.out.mi <- zelig(y ~ x, model = "ls", data = mi.out)
+
+#     expect_error(summary(z.out.mi), NA)
+#     expect_equivalent(round(as.numeric(z.out.mi$get_coef()[[1]][2]), 3), 0.1)
+#     expect_equivalent(round(as.numeric(combine_coef_se(z.out.mi)[[1]][1]), 3),
+#                             -0.122)
+
+#     z.out.mi.boot <- zelig(y ~ x, model = "ls", data = mi.out, bootstrap = 20)
+#     expect_equal(round(as.numeric(combine_coef_se(z.out.mi.boot)[[1]][1]), 3),
+#                     -0.094)
+
+#     expect_error(z.out.log <- zelig(y ~ log(x), model = "ls", data = mi.out),
+#                  NA)
+
+#     expect_error(z.out.log10 <- zelig(y ~ log(x, base = 10), model = "ls",
+#                                       data = mi.out), NA)
+# })
+
+# REQUIRE TEST for combine_coef_se for bootstrapped ----------------------------
+# test_that('REQUIRE TEST for combine_coef_se for bootstrapped', {
+#     set.seed(123)
+#     n <- 100
+#     data.1 <- data.frame(y = rnorm(n), x = runif(n))
+#     z.out.boot <- zelig(y ~ x, model = "ls", data = data.1, bootstrap = 20)
+
+#     expect_error(summary(z.out.boot), NA)
+#     expect_equal(round(as.numeric(combine_coef_se(z.out.boot)[[1]][1]), 3),
+#                  0.007)
+#     summary(z.out.boot, bagging = TRUE)
+#     expect_equal(round(as.numeric(
+#                     combine_coef_se(z.out.boot, bagging = TRUE)[[1]][1]), 3),
+#                  -0.052)
+
+#     z5_ls <- zelig(Fertility ~ Education, model = "ls", data = swiss)
+#     expect_equal(length(combine_coef_se(z5_ls)), 3)
+# })
+
+# REQUIRE TEST for to_zelig_mi -------------------------------------------------
+test_that('REQUIRE TEST for to_zelig_mi -- with list of data.frames', {
+    set.seed(123)
+    n <- 100
+    x1 <- runif(n)
+    x2 <- runif(n)
+    y <- rnorm(n)
+    data.1 <- data.frame(y = y, x = x1)
+    data.2 <- data.frame(y = y, x = x2)
+    data_mi = list(data.1, data.2)
+
+    mi.out <- to_zelig_mi(data_mi)
+    z.out <- zelig(y ~ x, model = "ls", data = mi.out)
+
+    expect_equivalent(round(as.numeric(z.out$get_coef()[[1]][2]), 3), 0.1)
+})
+
+# FAIL TEST for to_zelig_mi ----------------------------------------------------
+test_that('FAIL TESTS for to_zelig_mi', {
+    x <- 100
+    expect_error(to_zelig_mi(x))
+})
+
+# FAIL TEST for or_summary -----------------------------------------------------
+test_that("FAIL TEST for or_summary", {
+    expect_error(or_summary(1:10), "obj must be of summary.glm class.")
+})
diff --git a/tests/testthat/test-weibull.R b/tests/testthat/test-weibull.R
new file mode 100644
index 0000000..6f433bd
--- /dev/null
+++ b/tests/testthat/test-weibull.R
@@ -0,0 +1,7 @@
+# REQUIRE TEST Monte Carlo weibull ---------------------------------------------
+
+test_that('REQUIRE TEST weibull Monte Carlo', {
+    z <- zweibull$new()
+    test.weibull<-z$mcunit(minx = 2, maxx = 3, nsim = 2000, alpha = 1.5, b0 = -1, b1 = 2, ci = 0.99, plot = FALSE)
+    expect_true(test.weibull)
+})
\ No newline at end of file
diff --git a/tests/testthat/test-weights.R b/tests/testthat/test-weights.R
new file mode 100644
index 0000000..a200111
--- /dev/null
+++ b/tests/testthat/test-weights.R
@@ -0,0 +1,35 @@
+# REQUIRE TEST weighting ---------------------------------------------
+
+test_that('REQUIRE TEST weighting', {
+
+	set.seed(123)
+	x <- runif(90)
+	y <- c( 2*x[1:45], -3*x[46:90] ) + rnorm(90)
+	z <- as.numeric(y>0)
+	w1 <- c(rep(1.8, 45), rep(0.2,45))
+	mydata <- data.frame(z,y,x,w1)
+
+	w2 <- rep(c(1.8,0.2), 45)
+
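+	# w1 is a column of the data frame passed by name; w2 is a free-standing vector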
+	z1.out <- zelig( y ~ x, cite = FALSE, model = "ls", weights = "w1",
+	                 data = mydata)
+	expect_equivalent(length(z1.out$get_coef()[[1]]),2)
+
+	z2.out <- zelig( y ~ x, cite=FALSE, model="ls", weights=w2, data=mydata)
+	expect_equivalent(length(z2.out$get_coef()[[1]]),2)
+
+	z3.out <- zls$new()
+	expect_warning(z3.out$zelig( y ~ x, weights="noSuchName", data=mydata))
+
+	z4.out <- zls$new()
+	expect_warning(z4.out$zelig( y ~ x, weights=w2[1:10], data=mydata))
+
+	continuous.weights <- rep(x=c(0.6, 1, 1.4), times=30)
+	z5.out <- zelig( z ~ x, model="logit", weights=continuous.weights, data=mydata)
+	expect_equivalent(length(z5.out$get_coef()[[1]]),2)
+
+	integer.weights <- rep(x=c(0, 1, 2), times=30)
+	z6.out <- zelig( z ~ x, model="logit", weights=integer.weights, data=mydata)
+	expect_equivalent(length(z6.out$get_coef()[[1]]),2)
+
+})
diff --git a/tests/testthat/test-wrappers.R b/tests/testthat/test-wrappers.R
new file mode 100644
index 0000000..4c2deea
--- /dev/null
+++ b/tests/testthat/test-wrappers.R
@@ -0,0 +1,85 @@
+# Zelig 4 ls wrapper working ---------------------------------------------------
+
+test_that('ls wrapper continuous covar -- quickstart (Zelig 4 syntax)', {
+    z4 <- zelig(Fertility ~ Education, data = swiss, model = 'ls', cite = FALSE)
+
+    # extract education coefficient parameter estimate and compare to reference
+    expect_equivalent(round(as.numeric(z4$get_coef()[[1]][2]), 7), -0.8623503)
+})
+
+# Test missing model argument error---------------------------------------------
+
+test_that('missing model argument error', {
+    expect_error(zelig(Fertility ~ Education, data = swiss),
+               'Estimation model type not specified.\nSelect estimation model type with the model argument.'
+  )
+})
+
+# Test non-supported model type error ------------------------------------------
+
+test_that('non-supported model type error', {
+    expect_error(zelig(Fertility ~ Education, data = swiss, model = 'TEST'),
+                 'TEST is not a supported model type'
+  )
+})
+
+# REQUIRE TEST wrapper setx ----------------------------------------------------
+
+test_that('REQUIRE TEST wrapper setx', {
+    z4 <- zelig(Fertility ~ Education, data = swiss, model = 'ls')
+
+    z4_set <- setx(z4)
+    z4_set_vector <- round(as.vector(unlist(z4_set$setx.out)))
+    expect_equivalent(z4_set_vector, c(1, 1, 11))
+})
+
+# REQUIRE TEST wrapper setx1 ----------------------------------------------------
+
+test_that('REQUIRE TEST wrapper setx1', {
+    zpipe <- zelig(Fertility ~ Education, data = swiss, model = 'ls') %>%
+                setx(Education = 10) %>%
+                setx1(Education = 30) %>%
+                sim()
+    expect_equal(length(zpipe$sim.out), 2)
+
+})
+
+# FAIL TEST non-zelig objects --------------------------------------------------
+test_that('setx and sim non-zelig object fail', {
+    expect_error(setx('TEST'), 'Not a Zelig object and not convertible to one.')
+    expect_error(sim('TEST'), 'Not a Zelig object.')
+})
+
+# REQUIRE TEST sim wrapper minimal working --------------------------------------
+test_that('REQUIRE TEST sim wrapper minimal working', {
+    z5 <- zls$new()
+    z5 <- zelig(Fertility ~ Education, data = swiss, model = 'ls')
+    set_x <- setx(z5, Education = 5)
+
+    zsimwrap <- sim(z5, x = set_x, num = 10)
+    expect_equal(length(zsimwrap$get_qi()), 10)
+    expect_equal(length(zsimwrap$get_qi()), length(get_qi(zsimwrap)))
+
+    z5$setx(Education = 5)
+    zsimwrap <- sim(z5, num = 10)
+    expect_equal(length(zsimwrap$get_qi()), 10)
+})
+
+# REQUIRE TEST ATT wrapper -----------------------------------------------------
+test_that('REQUIRE TEST ATT wrapper', {
+    data(sanction)
+    # no wrapper
+    zqi.out <- zelig(num ~ target + coop + mil, model = "poisson",
+                     data = sanction)
+    zqi.out$ATT(treatment = "mil")
+    my.att <- zqi.out$get_qi(qi = "ATT", xvalue = "TE")
+
+    # with wrapper
+    library(dplyr)
+
+    z.att <- zelig(num ~ target + coop + mil, model = "poisson",
+                   data = sanction) %>%
+             ATT(treatment = "mil") %>%
+             get_qi(qi = "ATT", xvalue = "TE")
+    expect_equal(length(my.att), length(z.att))
+})
diff --git a/tests/testthat/test-zelig.R b/tests/testthat/test-zelig.R
new file mode 100644
index 0000000..1219c5b
--- /dev/null
+++ b/tests/testthat/test-zelig.R
@@ -0,0 +1,326 @@
+#### Integration tests for the Zelig estimate, set, sim, plot workflow      ####
+
+
+# FAIL TEST sim workflow -------------------------------------------------------
+test_that('FAIL TEST sim method warning if insufficient inputs', {
+  z5 <- zls$new()
+  expect_output(z5$zelig(Fertility ~ Education, model="ls", data = swiss),
+                 'Argument model is only valid for the Zelig wrapper, but not the Zelig method, and will be ignored.')
+
+  expect_warning(z5$sim(),
+                 'No simulations drawn, likely due to insufficient inputs.')
+
+  expect_error(z5$graph(), 'No simulated quantities of interest found.')
+})
+
+# FAIL TEST ci.plot range > length = 1 -----------------------------------------
+test_that('FAIL TEST ci.plot range > length = 1', {
+  z <- zls$new()
+  z$zelig(Fertility ~ Education, data = swiss)
+  expect_warning(z$setrange(Education = 5),
+                 'Only one fitted observation provided to setrange.\nConsider using setx instead.')
+
+  z$sim()
+  expect_error(z$graph(),
+               'Simulations for more than one fitted observation are required.')
+
+  expect_warning(z$setrange1(Education = 5),
+                 'Only one fitted observation provided to setrange.\nConsider using setx instead.')
+  expect_error(z$graph(),
+               'Simulations for more than one fitted observation are required.')
+})
+
+# REQUIRE TEST for by estimation workflow --------------------------------------
+test_that('REQUIRE TEST for by estimation workflow', {
+  # Majority Catholic dummy
+  swiss$maj_catholic <- cut(swiss$Catholic, breaks = c(0, 51, 100))
+
+  z5 <- zls$new()
+  z5$zelig(Fertility ~ Education, data = swiss, by = 'maj_catholic')
+  z5$setrange(Education = 5:15)
+  z5$sim()
+
+  expect_error(z5$graph(), NA)
+})
+
+# FAIL TEST for get_qi when applied to an object with no simulations ------------
+test_that('FAIL TEST for get_qi when applied to an object with no simulations', {
+    z <- zls$new()
+    z$zelig(Fertility ~ Education, data = swiss)
+    expect_error(z$get_qi(), 'No simulated quantities of interest found.')
+})
+
+# FAIL TEST for get_qi when unsupported qi supplied ----------------------------
+test_that('FAIL TEST for get_qi when unsupported qi supplied', {
+    z5 <- zls$new()
+    z5$zelig(Fertility ~ Education, data = swiss)
+    z5$setrange(Education = 5:15)
+    z5$sim()
+    expect_error(z5$get_qi(qi = "fa", xvalue = "range"), 'qi must be ev or pv.')
+})
+
+# FAIL TEST for estimation model failure ---------------------------------------
+test_that('FAIL TEST for estimation model failure', {
+  no_vary_df <- data.frame(y = rep(1, 10), x = rep(2, 10))
+  z <- zarima$new()
+  expect_error(z$zelig(y ~ x, data = no_vary_df),
+               'Dependent variable does not vary for at least one of the cases.')
+  expect_error(summary(z), 'Zelig model has not been estimated.')
+})
+
+# REQUIRE TEST for sim num argument --------------------------------------------
+test_that('REQUIRE TEST for sim num argument', {
+  z5 <- zls$new()
+  z5$zelig(Fertility ~ Education, data = swiss)
+  z5$setx(Education = 5)
+
+  z5$sim()
+  expect_equal(length(z5$get_qi()), 1000)
+
+  z5$sim(num = 10) # Look into unexpected behaviour if sim order is reversed
+  expect_equal(length(z5$get_qi()), 10)
+})
+
+# REQUIRE TEST from_zelig_model returns expected fitted model object -----------------
+test_that('REQUIRE TEST from_zelig_model returns expected fitted model object', {
+  z5 <- zls$new()
+  z5$zelig(Fertility ~ Education, data = swiss)
+  model_object <- z5$from_zelig_model()
+  expect_is(model_object, class = 'lm')
+  expect_equal(as.character(model_object$call[1]), 'lm')
+})
+
+# REQUIRE TEST from_zelig_model returns each fitted model object from mi -------------
+test_that('REQUIRE TEST from_zelig_model returns each fitted model object from mi', {
+  set.seed(123)
+  n <- 100
+  x1 <- runif(n)
+  x2 <- runif(n)
+  y <- rnorm(n)
+  data.1 <- data.frame(y = y, x = x1)
+  data.2 <- data.frame(y = y, x = x2)
+
+  mi.out <- to_zelig_mi(data.1, data.2)
+  z.out <- zelig(y ~ x, model = "ls", data = mi.out)
+  model_list <- z.out$from_zelig_model()
+  expect_is(model_list, class = 'list')
+  expect_equal(as.character(model_list[[2]]$call[1]), 'lm')
+})
+
+# REQUIRE TEST functioning simparam with by and ATT ----------------------------
+test_that('REQUIRE TEST functioning simparam with by and ATT', {
+  set.seed(123)
+  n <- 100
+  xx <- rbinom(n = n, size = 1, prob = 0.3)
+  zz <- runif(n)
+  ss <- runif(n)
+  rr <- rbinom(n, size = 1, prob = 0.5)
+  mypi <- 1/(1 + exp(-xx -3*zz -0.5))
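+  # true probabilities: inverse-logit of (0.5 + xx + 3*zz), used to draw the binary outcome yb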
+  yb <- rbinom(n, size = 1, prob = mypi)
+  data <- data.frame(rr, ss, xx, zz, yb)
+
+  zb.out <- zlogit$new()
+  zb.out$zelig(yb ~ xx + zz, data = data, by = "rr")
+
+  zb.out$ATT(treatment = "xx")
+  out <- zb.out$get_qi(qi = "ATT", xvalue = "TE")
+  expect_equal(length(out), 1000)
+})
+
+# REQUIRE TEST getters values and dimensions and plot does not fail-------------
+test_that("REQUIRE TEST getters values and dimensions and plot does not fail",
+{
+    set.seed(123)
+    n <- 1000
+    myseq <- 1:n
+    x <- myseq/n
+    y <- x + (-1)^(myseq) * 0.1
+    mydata <- data.frame(y = y, x = x)
+    mydata2 <- data.frame(y = y, x = x + 2)
+    z.out <- zelig(y ~ x, model = "ls", data = mydata)
+
+    expect_equivalent(round(as.numeric(z.out$get_coef()[[1]]), 2), c(0, 1))
+    expect_equivalent(length(z.out$get_predict()[[1]]), n)
+    expect_equivalent(length(z.out$get_fitted()[[1]]), n)
+    expect_equivalent(dim(z.out$get_vcov()[[1]]), c(2, 2))
+
+    z.out$setx(x = 0)
+    z.out$setx1(x = 1)
+    show.setx <- summary(z.out)
+    z.out$sim()
+    show.sim <- summary(z.out)
+
+    expect_equivalent(length(z.out$get_qi(qi = "ev", xvalue = "x")), n)
+    expect_equivalent(round(mean(z.out$get_qi(qi = "ev", xvalue = "x")),
+                            2), 0)
+    expect_equivalent(length(z.out$get_qi(qi = "ev", xvalue = "x1")),
+                      n)
+    expect_equivalent(round(mean(z.out$get_qi(qi = "ev", xvalue = "x1")),
+                            2), 1)
+
+    expect_equivalent(length(z.out$get_qi(qi = "pv", xvalue = "x")), n)
+    expect_equivalent(round(mean(z.out$get_qi(qi = "pv", xvalue = "x")),
+                            2), 0)
+    expect_equivalent(length(z.out$get_qi(qi = "pv", xvalue = "x1")),
+                      n)
+    expect_equivalent(round(mean(z.out$get_qi(qi = "pv", xvalue = "x1")),
+                            2), 1)
+
+    expect_equivalent(length(z.out$get_qi(qi = "fd", xvalue = "x1")),
+                      n)
+    expect_equivalent(round(mean(z.out$get_qi(qi = "fd", xvalue = "x1")), 2), 1)
+
+    expect_false(show.setx[[1]])
+    expect_false(show.sim[[1]])
+    expect_true(is.null(plot(z.out)))
+
+    xseq <- seq(from = 0, to = 1, length = 10)
+    z.out$setrange(x = xseq)
+    z.out$sim()
+
+    expect_true(is.null(plot(z.out)))
+
+    myref <- capture.output(z.out$references())
+    expect_equivalent(substr(myref[1], 1, 11), "R Core Team")
+
+    set.seed(123)
+    boot.out <- zelig(y ~ x, model = "ls", bootstrap = 20, data = mydata)
+    expect_equivalent(round(as.numeric(boot.out$get_coef()[[1]]), 2),
+                      c(0, 1))
+
+    show.boot <- summary(boot.out, bagging = TRUE)
+    expect_false(show.boot[[1]])
+
+    show.boot <- summary(boot.out, subset=2:3)
+    expect_false(show.boot[[1]])
+
+
+    set.seed(123)
+    mi.out <- zelig(y ~ x, model = "ls", data = mi(mydata, mydata2))
+    expect_equivalent(round(as.numeric(mi.out$get_coef()[[1]]), 2), c(0,
+                                                                     1))
+    expect_equivalent(round(as.numeric(mi.out$get_coef()[[2]]), 2), c(-2,
+                                                                     1))
+    expect_equivalent(length(mi.out$toJSON()), 1)
+
+    show.mi <- summary(mi.out)
+    expect_false(show.mi[[1]])
+    show.mi.subset <- summary(mi.out, subset = 1)
+    expect_false(show.mi.subset[[1]])
+})
+
+# REQUIRE TEST Binary QIs and ATT effects and BY argument-------------
+test_that('REQUIRE TEST Binary QIs and ATT effects and BY argument', {
+  set.seed(123)
+  # Simulate data
+  n <- 100
+  xx <- rbinom(n = n, size = 1, prob = 0.5)
+  zz <- runif(n)
+  ss <- runif(n)
+  rr <- rbinom(n, size = 1, prob = 0.5)
+  mypi <- 1/ (1+exp(-xx -3*zz -0.5))
+  yb <- rbinom(n, size = 1, prob = mypi)
+  data <- data.frame(rr, ss, xx, zz, yb)
+
+  # Estimate Zelig Logit models
+  zb.out <- zlogit$new()
+  zb.out$zelig(yb ~ xx + zz, data=data, by="rr")
+
+  show.logit <- summary(zb.out)
+  expect_false(show.logit[[1]])
+
+  zb2.out <- zlogit$new()
+  zb2.out$zelig(yb ~ xx, data=data)
+
+  zb3.out <- zlogit$new()
+  zb3.out$zelig(yb ~ xx + zz, data=data)
+
+  x.high <- setx(zb.out, xx = quantile(data$xx, prob = 0.75))
+  x.low <- setx(zb.out, xx = quantile(data$xx, prob = 0.25))
+  s.out <- sim(zb.out, x = x.high, x1 = x.low)
+
+  show.logit <- summary(s.out)
+  expect_false(show.logit[[1]])
+  expect_true(is.null(plot(s.out)))
+
+  # Method to calculate ATT
+  zb.out$ATT(treatment = "xx")
+
+  # Getter to extract ATT
+  out <- zb.out$get_qi(qi="ATT", xvalue="TE")
+  expect_equal(length(out), 1000)
+
+  # Plot ROC
+  expect_true(is.null(rocplot(zb2.out, zb3.out)))
+})
+
+# REQUIRE TEST for get_names method----------------------------------------------
+test_that('REQUIRE TEST for names field', {
+  z <- zls$new()
+  z$zelig(Fertility ~ Education, data = swiss)
+  expect_is(z$get_names(), class = 'character')
+  expect_false(is.null(names(z)))
+})
+
+# REQUIRE TEST for get_residuals method -----------------------------------------
+test_that('REQUIRE TEST for get_residuals method', {
+  z <- zls$new()
+  z$zelig(Fertility ~ Education, data = swiss)
+  expect_is(z$get_residuals(), class = 'list')
+  expect_false(is.null(residuals(z)))
+})
+
+# REQUIRE TEST for get_df_residual method -----------------------------------------
+test_that('REQUIRE TEST for get_df_residual method', {
+  z <- zls$new()
+  z$zelig(Fertility ~ Education, data = swiss)
+  expect_equal(length(z$get_df_residual()), 1)
+  expect_equal(length(df.residual(z)), 1)
+})
+
+# REQUIRE TEST for get_model_data method ---------------------------------------
+test_that('REQUIRE TEST for get_model_data method', {
+  z <- zls$new()
+  z$zelig(Fertility ~ Education, data = swiss)
+  expect_is(z$get_model_data(), class = 'data.frame')
+})
+
+# REQUIRE TEST for get_pvalue method ---------------------------------------
+test_that('REQUIRE TEST for get_pvalue', {
+  z <- zls$new()
+  z$zelig(Fertility ~ Education, data = swiss)
+  expect_is(z$get_pvalue()[[1]], class = 'numeric')
+  expect_equal(z$get_pvalue()[[1]], get_pvalue(z)[[1]])
+})
+
+# REQUIRE TEST for get_se method ---------------------------------------
+test_that('REQUIRE TEST for get_se', {
+  z <- zls$new()
+  z$zelig(Fertility ~ Education, data = swiss)
+  expect_is(z$get_se()[[1]], class = 'numeric')
+  expect_equal(z$get_se()[[1]], get_se(z)[[1]])
+})
+
+# REQUIRE TEST setx with logical covariates ------------------------------------
+test_that('REQUIRE TEST setx with logical covariates', {
+  swiss$maj_catholic <- cut(swiss$Catholic, breaks = c(0, 51, 100))
+  swiss$maj_catholic_logical <- FALSE
+  swiss$maj_catholic_logical[swiss$maj_catholic == '(51,100]'] <- TRUE
+  z5l <- zls$new()
+  z5l$zelig(Fertility ~ Education + maj_catholic_logical, data = swiss)
+  z5l$setx(maj_catholic_logical = TRUE)
+  expect_is(z5l$setx.out$x, class = c("rowwise_df", "tbl_df", "tbl",
+                                        "data.frame"))
+})
+
+# REQUIRE TESTS for standard R methods with zelig models -----------------------
+test_that('REQUIRE TESTS for standard R methods with zelig models', {
+    z5 <- zls$new()
+    z5$zelig(Fertility ~ Education, data = swiss)
+
+    expect_equal(length(coefficients(z5)), length(coef(z5)), 2)
+    expect_equal(nrow(vcov(z5)[[1]]), 2)
+    expect_equal(length(fitted(z5)[[1]]), 47)
+    expect_equal(length(predict(z5)[[1]]), 47)
+})
+
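
For orientation, the bulk of the diff above adds upstream testthat coverage for the core Zelig 5 workflow: estimate a model, set covariate values, simulate, and extract quantities of interest. As a reading aid, the following minimal sketch walks through that same workflow interactively; it assumes the Zelig R package built by this snapshot is installed and uses the built-in swiss data set exactly as the tests do.

# Minimal sketch of the workflow exercised by the new tests
# (assumes the Zelig R package from this snapshot is installed).
library(Zelig)

# Estimate a least-squares model via the reference-class interface.
z5 <- zls$new()
z5$zelig(Fertility ~ Education, data = swiss)

# Set covariate values, simulate, and pull quantities of interest.
z5$setx(Education = 5)
z5$sim()                                  # defaults to 1000 simulations
ev <- z5$get_qi(qi = "ev", xvalue = "x")  # expected values at Education = 5
length(ev)                                # 1000, as checked by the sim num test

# The same steps via the zelig() wrapper used elsewhere in the tests.
z.out <- zelig(Fertility ~ Education, model = "ls", data = swiss)
z.out$setx(Education = 5)
z.out$sim(num = 10)
summary(z.out)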

Debdiff

[The following lists of changes regard files as different if they have different names, permissions or owners.]

Files in second set of .debs but not in first

-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/CITATION
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/DESCRIPTION
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/INDEX
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/JSON/zelig5models.json
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/Meta/Rd.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/Meta/data.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/Meta/features.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/Meta/hsearch.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/Meta/links.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/Meta/nsInfo.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/Meta/package.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/NAMESPACE
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/NEWS.md
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/R/Zelig
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/R/Zelig.rdb
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/R/Zelig.rdx
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/CigarettesSW.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/MatchIt.url.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/PErisk.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/SupremeCourt.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/Weimar.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/Zelig.url.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/approval.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/bivariate.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/coalition.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/coalition2.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/eidat.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/free1.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/free2.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/friendship.RData
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/grunfeld.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/hoff.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/homerun.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/immi1.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/immi2.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/immi3.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/immi4.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/immi5.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/immigration.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/klein.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/kmenta.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/macro.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/mexico.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/mid.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/newpainters.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/sanction.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/seatshare.rda
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/sna.ex.RData
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/swiss.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/tobin.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/turnout.tab.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/data/voteincome.txt.gz
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/help/AnIndex
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/help/Zelig.rdb
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/help/Zelig.rdx
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/help/aliases.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/help/figures/example_plot_ci_plot-1.png
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/help/figures/example_plot_graph-1.png
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/help/figures/zelig.png
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/help/paths.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/html/00Index.html
-rw-r--r--  root/root   /usr/lib/R/site-library/Zelig/html/R.css
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-amelia.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-arima.R.gz
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-assertions.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-bayesdiagnostics.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-createJSON.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-exp.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-gamma.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-gammasurvey.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-interface.R.gz
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-ivreg.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-logit.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-logitbayes.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-logitsurvey.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-lognom.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-ls.R.gz
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-matchit.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-negbin.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-normal-gee.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-normal.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-normalbayes.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-normalsurvey.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-plots.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-poisson.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-poissonbayes.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-poissonsurvey.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-probit.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-probitbayes.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-probitsurvey.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-qislimmer.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-quantile.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-relogit.R.gz
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-survey.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-tobit.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-tobitbayes.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-utils.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-weibull.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-weights.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-wrappers.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-zelig.R.gz

Files in first set of .debs but not in second

-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/DESCRIPTION
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/INDEX
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/JSON/zelig5choicemodels.json
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/Meta/Rd.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/Meta/data.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/Meta/demo.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/Meta/features.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/Meta/hsearch.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/Meta/links.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/Meta/nsInfo.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/Meta/package.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/NAMESPACE
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/NEWS.md
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/R/ZeligChoice
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/R/ZeligChoice.rdb
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/R/ZeligChoice.rdx
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/data/coalition.tab
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/data/sanction.tab
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/demo/demo-blogit.R
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/demo/demo-bprobit.R
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/demo/demo-mlogit.R
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/demo/demo-ologit.R
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/demo/demo-oprobit.R
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/help/AnIndex
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/help/ZeligChoice.rdb
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/help/ZeligChoice.rdx
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/help/aliases.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/help/paths.rds
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/html/00Index.html
-rw-r--r--  root/root   /usr/lib/R/site-library/ZeligChoice/html/R.css
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-mlogit.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-ologit.R
-rw-r--r--  root/root   /usr/share/doc/r-cran-zeligchoice/tests/testthat/test-oprobit.R

Control files: lines which differ (wdiff format)

  • Depends: r-base-core (>= 4.2.2.20221110-2), 4.2.1-3), r-api-4.0, r-cran-dplyr, r-cran-survival, r-cran-aer, r-cran-amelia, r-cran-coda, r-cran-dplyr (>= 0.3.0.2), r-cran-formula, r-cran-geepack, r-cran-jsonlite, r-cran-sandwich, r-cran-mass, r-cran-vgam, r-cran-zelig (>= 5.1-1) r-cran-matchit, r-cran-maxlik, r-cran-mcmcpack, r-cran-quantreg, r-cran-survey, r-cran-vgam
  • Suggests: r-cran-ei, r-cran-eipack, r-cran-knitr, r-cran-zeligverse r-cran-rmarkdown
