Bayesian MCMC spatial simultaneous autoregressive model estimation
SET_MCMC.Rd
The spBreg_lag function is an early-release version of the Matlab Spatial Econometrics Toolbox function sar_g.m; it uses drawing by inversion and does not accommodate heteroskedastic disturbances.
Usage
spBreg_lag(formula, data = list(), listw, na.action, Durbin, type,
zero.policy=NULL, control=list())
spBreg_sac(formula, data = list(), listw, listw2=NULL, na.action,
Durbin, type, zero.policy=NULL, control=list())
spBreg_err(formula, data = list(), listw, na.action, Durbin, etype,
zero.policy=NULL, control=list())
# S3 method for class 'MCMC_sar_G'
impacts(obj, ..., tr=NULL, listw=NULL, evalues=NULL, Q=NULL)
# S3 method for class 'MCMC_sem_G'
impacts(obj, ..., tr=NULL, listw=NULL, evalues=NULL, Q=NULL)
# S3 method for class 'MCMC_sac_G'
impacts(obj, ..., tr=NULL, listw=NULL, evalues=NULL, Q=NULL)
Arguments
- formula
a symbolic description of the model to be fit. The details of model specification are given for lm()
- data
an optional data frame containing the variables in the model. By default the variables are taken from the environment from which the function is called.
- listw, listw2
a listw object created for example by nb2listw
- na.action
a function (default options("na.action")), can also be na.omit or na.exclude with consequences for residuals and fitted values - in these cases the weights list will be subsetted to remove NAs in the data. It may be necessary to set zero.policy to TRUE because this subsetting may create no-neighbour observations. Note that only weights lists created without using the glist argument to nb2listw may be subsetted.
- Durbin
default FALSE (spatial lag model); if TRUE, full spatial Durbin model; if a formula object, the subset of explanatory variables to lag (a usage sketch follows this list). From version 1.3-7, the presence of factors (categorical variables) in the Durbin term will give a warning, as it is as yet unknown how spatial lags of categorical variables should be interpreted.
- type, etype
(use the ‘Durbin=’ argument - retained for backwards compatibility only) default "lag", may be set to "mixed"; when "mixed", the lagged intercept is dropped for spatial weights style "W", that is row-standardised weights, but otherwise included; “Durbin” may be used instead of “mixed”
- zero.policy
default NULL, use the global option value; if TRUE, assign zero to the lagged value of zones without neighbours; if FALSE, assign NA
- control
list of extra control arguments - see the Control arguments section below
- obj
A spatial regression object
- ...
Arguments passed through to methods in the coda package
- tr
A vector of traces of powers of the spatial weights matrix created using trW, for approximate impact measures; if not given, listw must be given for exact measures (for small to moderate spatial weights matrices); the traces must be for the same spatial weights as were used in fitting the spatial regression, and must be row-standardised
- evalues
vector of eigenvalues of the spatial weights matrix for impacts calculations
- Q
default NULL, else an integer number of cumulative power series impacts to calculate if tr is given
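A minimal usage sketch, mirroring the Examples section below: fit a lag model that lags only INC via a Durbin formula, then summarise impacts from pre-computed traces of powers of the weights matrix for the tr argument. The object names (lw, trMatc, fit) are illustrative only.

library(spatialreg)
data(oldcol, package="spdep")
lw <- spdep::nb2listw(COL.nb, style="W")              # row-standardised weights
trMatc <- trW(as(lw, "CsparseMatrix"), type="mult")   # traces for the 'tr' argument
set.seed(1)
fit <- spBreg_lag(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw, Durbin=~INC)
summary(impacts(fit, tr=trMatc), zstats=TRUE, short=TRUE)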
Control arguments
- tol.opt
the desired accuracy of the optimization - passed to optimize() (default = square root of double precision machine tolerance; a larger root may be needed, see help(boston) for an example)
- fdHess
default NULL, then set to (method != "eigen") internally; use fdHess to compute an approximate Hessian using finite differences when using sparse matrix methods; used to make a coefficient covariance matrix when the number of observations is large; may be turned off to save resources if need be
- optimHess
default FALSE, use fdHess from nlme; if TRUE, use optim to calculate the Hessian at the optimum
- optimHessMethod
default “optimHess”, may be “nlm” or one of the optim methods
- compiled_sse
default FALSE; logical value used in the log likelihood function to choose compiled code for computing SSE
- Imult
default 2; used for preparing the Cholesky decompositions for updating in the Jacobian function
- super
if NULL (default), set to FALSE to use a simplicial decomposition for the sparse Cholesky decomposition and method “Matrix_J”, set to as.logical(NA) for method “Matrix”; if TRUE, use a supernodal decomposition
- cheb_q
default 5; highest power of the approximating polynomial for the Chebyshev approximation (see the sketch after this list)
- MC_p
default 16; number of random variates
- MC_m
default 30; number of products of random variates matrix and spatial weights matrix
- spamPivot
default “MMD”, alternative “RCM”
- in_coef
default 0.1, coefficient value for initial Cholesky decomposition in “spam_update”
- type
default “MC”, used with method “moments”; alternatives “mult” and “moments”, for use if trs is missing; see trW
- correct
default TRUE, used with method “moments” to compute the Smirnov/Anselin correction term
- trunc
default TRUE, used with method “moments” to truncate the Smirnov/Anselin correction term
- SE_method
default “LU”, may be “MC”
- nrho
default 200, as in the SE toolbox; the size of the first stage lndet grid; it may be reduced to, for example, 40
- interpn
default 2000, as in the SE toolbox; the size of the second stage lndet grid
- small_asy
default TRUE; if the method is not “eigen”, use asymptotic covariances rather than numerical Hessian ones if n <= small
- small
default 1500; threshold number of observations for asymptotic covariances when the method is not “eigen”
- SElndet
default NULL, may be used to pass a pre-computed SE toolbox style matrix of coefficients and their lndet values to the "SE_classic" and "SE_whichMin" methods
- LU_order
default FALSE; used in “LU_prepermutate”, note warnings given for the lu method
- pre_eig
default NULL; may be used to pass a pre-computed vector of eigenvalues
- OrdVsign
default 1; used to set the sign of the final component to negative if -1 (alpha times ((sigma squared) squared) in Ord (1975) equation B.1)
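Most of these fine-tuning options are shared with the maximum-likelihood fitters; for the Bayesian samplers they mainly matter through the log-determinant setup. A minimal sketch, assuming the Chebyshev log-determinant approximation is selected via the ldet_method option described in the next section and that cheb_q is then honoured by the samplers (this pass-through is an assumption, not shown in the package examples):

library(spatialreg)
data(oldcol, package="spdep")
lw <- spdep::nb2listw(COL.nb, style="W")
set.seed(1)
# Assumed: ldet_method selects the Jacobian method as in lagsarlm, and
# cheb_q sets the order of the Chebyshev approximation for that method.
fit_cheb <- spBreg_lag(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
                       control=list(ldet_method="Chebyshev", cheb_q=5))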
Extra Bayesian control arguments
- ldet_method
default “SE_classic”; equivalent to the method argument in lagsarlm
- interval
default c(-1, 1); used unmodified or set internally by jacobianSetup
- ndraw
default 2500L; integer total number of draws
- nomit
default 500L; integer total number of omitted burn-in draws
- thin
default 1L; integer thinning proportion
- verbose
default FALSE; inverse of the quiet argument in lagsarlm
- detval
default NULL; not yet in use, precomputed matrix of log determinants
- prior
a list with the following components (a usage sketch follows this list):
- rhoMH, lambdaMH
default FALSE, use griddy Gibbs updates for the spatial coefficient; if TRUE, use Metropolis-Hastings updates
- Tbeta
default NULL; values of the betas variance-covariance matrix, set to diag(k)*1e+12 if NULL
- c_beta
default NULL; values of the betas, set to 0 if NULL
- rho
default 0.5; value of the autoregressive coefficient
- sige
default 1; value of the residual variance
- nu
default 0; informative Gamma(nu,d0) prior on sige
- d0
default 0; informative Gamma(nu,d0) prior on sige
- a1
default 1.01; parameter for beta(a1,a2) prior on rho
- a2
default 1.01; parameter for beta(a1,a2) prior on rho
- cc
default 0.2; initial tuning parameter for M-H sampling
- gG_sige
default TRUE; include sige in the lambda griddy Gibbs update
- cc1
default 0.2; initial tuning parameter for M-H sampling
- cc2
default 0.2; initial tuning parameter for M-H sampling
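A sketch of the Bayesian control list in use: run a longer chain with thinning, switch the lambda update to Metropolis-Hastings, and supply an informative Gamma(nu,d0) prior on sige. The draw counts and prior values are illustrative only.

library(spatialreg)
data(oldcol, package="spdep")
lw <- spdep::nb2listw(COL.nb, style="W")
set.seed(1)
fit_mh <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
                     control=list(ndraw=5000L, nomit=1000L, thin=2L,
                                  prior=list(lambdaMH=TRUE, nu=0.5, d0=0.5)))
summary(fit_mh)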
Details
From version 1.4.1, functions for models including spatially lagged independent variables warn on fitting if any of the right-hand side variables are factors. This is because the interpretation of coefficients other than slopes is unclear when a variable is not measured on an unbounded continuous scale, as is the case for factors. Factor variable names are shown with the suffix “(F)”, others with “dy/dx”, in output from the impacts methods. A discussion can be found at https://github.com/rsbivand/eqc25_talk.
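A minimal sketch of the warning described above, using a hypothetical factor built from the CP indicator (assumed here to be present in COL.OLD):

library(spatialreg)
data(oldcol, package="spdep")
lw <- spdep::nb2listw(COL.nb, style="W")
COL.OLD$CPf <- factor(COL.OLD$CP)   # hypothetical categorical version of CP
# Expect a warning about factors among the (lagged) right-hand side variables
fit_f <- spBreg_lag(CRIME ~ INC + CPf, data=COL.OLD, listw=lw, Durbin=TRUE)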
Author
Roger Bivand Roger.Bivand@nhh.no, with thanks to Abhirup Mallik and Virgilio Gómez-Rubio for initial coding during GSoC 2011
Examples
#require("spdep", quietly=TRUE)
data(oldcol, package="spdep")
lw <- spdep::nb2listw(COL.nb, style="W")
require("coda", quietly=TRUE)
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw)
print(summary(COL.err.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 59.6672 6.99422 0.156396 0.156396
#> INC -0.9419 0.39895 0.008921 0.008921
#> HOVAL -0.3002 0.09986 0.002233 0.002233
#> lambda 0.5664 0.15362 0.003435 0.003435
#> sige 115.0226 25.92927 0.579796 0.622135
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 45.6439 55.3852 60.1049 64.5665 72.1349
#> INC -1.7360 -1.2051 -0.9453 -0.6799 -0.1320
#> HOVAL -0.4979 -0.3676 -0.3002 -0.2337 -0.1058
#> lambda 0.2335 0.4717 0.5773 0.6758 0.8309
#> sige 74.4246 96.7347 111.7603 129.2463 176.3327
#>
print(raftery.diag(COL.err.Bayes, r=0.01))
#>
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95
#>
#> Burn-in Total Lower bound Dependence
#> (M) (N) (Nmin) factor (I)
#> (Intercept) 3 1052 937 1.120
#> INC 2 969 937 1.030
#> HOVAL 3 1052 937 1.120
#> lambda 3 1010 937 1.080
#> sige 2 930 937 0.993
#>
# \dontrun{
ev <- eigenw(lw)
W <- as(lw, "CsparseMatrix")
trMatc <- trW(W, type="mult")
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
control=list(prior=list(lambdaMH=TRUE)))
print(summary(COL.err.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 59.2417 7.7902 0.174194 0.276359
#> INC -0.9134 0.4001 0.008947 0.013691
#> HOVAL -0.3026 0.1008 0.002254 0.002254
#> lambda 0.6065 0.1501 0.003357 0.008760
#> sige 116.9971 26.9205 0.601961 0.673301
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 43.6678 54.8568 59.6432 64.2065 73.1577
#> INC -1.6733 -1.1882 -0.9196 -0.6463 -0.1114
#> HOVAL -0.5098 -0.3694 -0.2988 -0.2345 -0.1026
#> lambda 0.2943 0.5064 0.6194 0.7124 0.8693
#> sige 74.4982 97.7954 113.7999 131.9776 179.2701
#>
print(raftery.diag(COL.err.Bayes, r=0.01))
#>
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95
#>
#> Burn-in Total Lower bound Dependence
#> (M) (N) (Nmin) factor (I)
#> (Intercept) 3 1010 937 1.080
#> INC 2 969 937 1.030
#> HOVAL 2 892 937 0.952
#> lambda 17 4454 937 4.750
#> sige 2 892 937 0.952
#>
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
Durbin=TRUE)
print(summary(COL.err.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 72.66406 11.7668 0.263114 0.263114
#> INC -1.01634 0.3765 0.008418 0.008418
#> HOVAL -0.28361 0.1040 0.002327 0.002327
#> lag.INC -1.05926 0.7354 0.016444 0.016444
#> lag.HOVAL 0.09212 0.2371 0.005301 0.005481
#> lambda 0.48878 0.1732 0.003873 0.003873
#> sige 114.63644 26.6590 0.596113 0.649432
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 48.2262 65.57362 73.00201 80.0296 95.74364
#> INC -1.7385 -1.26995 -1.02303 -0.7643 -0.25901
#> HOVAL -0.4882 -0.35461 -0.28094 -0.2146 -0.08063
#> lag.INC -2.4036 -1.53159 -1.08683 -0.6139 0.41978
#> lag.HOVAL -0.3765 -0.05964 0.09356 0.2514 0.55517
#> lambda 0.1165 0.37669 0.49775 0.6118 0.79690
#> sige 73.3727 95.84235 110.62143 128.1996 176.45402
#>
print(summary(impacts(COL.err.Bayes)))
#> Impact measures (SDEM, MCMC, n):
#> Direct Indirect Total
#> INC dy/dx -1.016339 -1.05925739 -2.0755967
#> HOVAL dy/dx -0.283606 0.09212363 -0.1914823
#> ========================================================
#> Standard errors:
#> Direct Indirect Total
#> INC dy/dx 0.35676002 0.6968581 0.8161545
#> HOVAL dy/dx 0.09859437 0.2246496 0.2705969
#> ========================================================
#> Z-values:
#> Direct Indirect Total
#> INC dy/dx -2.848804 -1.520048 -2.5431420
#> HOVAL dy/dx -2.876492 0.410077 -0.7076294
#>
#> p-values:
#> Direct Indirect Total
#> INC dy/dx 0.0043884 0.12850 0.010986
#> HOVAL dy/dx 0.0040212 0.68175 0.479175
#>
print(raftery.diag(COL.err.Bayes, r=0.01))
#>
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95
#>
#> Burn-in Total Lower bound Dependence
#> (M) (N) (Nmin) factor (I)
#> (Intercept) 2 930 937 0.993
#> INC 3 1010 937 1.080
#> HOVAL 3 1010 937 1.080
#> lag.INC 3 1010 937 1.080
#> lag.HOVAL 2 930 937 0.993
#> lambda 2 930 937 0.993
#> sige 2 930 937 0.993
#>
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
Durbin=TRUE, control=list(prior=list(lambdaMH=TRUE)))
print(summary(COL.err.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 71.16419 38.2664 0.855663 0.767700
#> INC -0.97753 0.4121 0.009215 0.010206
#> HOVAL -0.28552 0.1136 0.002539 0.002834
#> lag.INC -0.95168 0.8463 0.018923 0.020058
#> lag.HOVAL 0.06394 0.2596 0.005804 0.006366
#> lambda 0.59930 0.1851 0.004138 0.012759
#> sige 120.00256 28.9862 0.648152 1.082759
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 38.1297 62.84857 72.10787 80.6247 103.90796
#> INC -1.7586 -1.24541 -0.99043 -0.7191 -0.14386
#> HOVAL -0.5127 -0.35962 -0.28458 -0.2090 -0.07283
#> lag.INC -2.4800 -1.51124 -0.99352 -0.4449 0.83674
#> lag.HOVAL -0.4753 -0.09248 0.07433 0.2385 0.54945
#> lambda 0.2181 0.47814 0.60427 0.7269 0.96282
#> sige 75.1226 99.05791 115.87747 136.5226 185.63955
#>
print(summary(impacts(COL.err.Bayes)))
#> Impact measures (SDEM, MCMC, n):
#> Direct Indirect Total
#> INC dy/dx -0.9775250 -0.95168351 -1.9292085
#> HOVAL dy/dx -0.2855169 0.06394437 -0.2215725
#> ========================================================
#> Standard errors:
#> Direct Indirect Total
#> INC dy/dx 0.3905035 0.8019178 1.0007631
#> HOVAL dy/dx 0.1076044 0.2459797 0.3072309
#> ========================================================
#> Z-values:
#> Direct Indirect Total
#> INC dy/dx -2.503243 -1.1867594 -1.927737
#> HOVAL dy/dx -2.653393 0.2599578 -0.721192
#>
#> p-values:
#> Direct Indirect Total
#> INC dy/dx 0.0123061 0.23532 0.053888
#> HOVAL dy/dx 0.0079687 0.79490 0.470791
#>
print(raftery.diag(COL.err.Bayes, r=0.01))
#>
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95
#>
#> Burn-in Total Lower bound Dependence
#> (M) (N) (Nmin) factor (I)
#> (Intercept) 8 2758 937 2.940
#> INC 2 892 937 0.952
#> HOVAL 2 930 937 0.993
#> lag.INC 2 930 937 0.993
#> lag.HOVAL 3 1010 937 1.080
#> lambda 13 3735 937 3.990
#> sige 2 892 937 0.952
#>
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
Durbin=~INC)
print(summary(COL.err.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 74.5421 10.98796 0.245698 0.245698
#> INC -1.0297 0.38097 0.008519 0.008519
#> HOVAL -0.2872 0.09892 0.002212 0.002212
#> lag.INC -0.9231 0.59539 0.013313 0.012533
#> lambda 0.4879 0.17196 0.003845 0.003672
#> sige 112.1211 25.46879 0.569499 0.567575
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 52.0871 67.7889 74.9228 81.7064 95.00088
#> INC -1.7829 -1.2848 -1.0398 -0.7779 -0.24788
#> HOVAL -0.4805 -0.3500 -0.2865 -0.2229 -0.08835
#> lag.INC -2.0862 -1.3212 -0.9222 -0.5493 0.29482
#> lambda 0.1355 0.3737 0.4987 0.6088 0.80100
#> sige 73.0964 94.2454 108.3158 125.9323 173.26053
#>
print(summary(impacts(COL.err.Bayes)))
#> Impact measures (SDEM, MCMC, n):
#> Direct Indirect Total
#> INC dy/dx -1.0296650 -0.9231233 -1.9527884
#> HOVAL dy/dx -0.2872125 NA -0.2872125
#> ========================================================
#> Standard errors:
#> Direct Indirect Total
#> INC dy/dx 0.36509016 0.5705708 0.69932196
#> HOVAL dy/dx 0.09480018 NA 0.09480018
#> ========================================================
#> Z-values:
#> Direct Indirect Total
#> INC dy/dx -2.820303 -1.617895 -2.792402
#> HOVAL dy/dx -3.029662 NA -3.029662
#>
#> p-values:
#> Direct Indirect Total
#> INC dy/dx 0.0047978 0.10569 0.0052318
#> HOVAL dy/dx 0.0024483 NA 0.0024483
#>
print(raftery.diag(COL.err.Bayes, r=0.01))
#>
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95
#>
#> Burn-in Total Lower bound Dependence
#> (M) (N) (Nmin) factor (I)
#> (Intercept) 3 1010 937 1.080
#> INC 2 930 937 0.993
#> HOVAL 2 930 937 0.993
#> lag.INC 2 969 937 1.030
#> lambda 2 892 937 0.952
#> sige 2 969 937 1.030
#>
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
Durbin=~INC, control=list(prior=list(lambdaMH=TRUE)))
print(summary(COL.err.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 72.6350 13.78653 0.308276 0.386681
#> INC -0.9738 0.40021 0.008949 0.008949
#> HOVAL -0.2913 0.09901 0.002214 0.002214
#> lag.INC -0.8515 0.66940 0.014968 0.016021
#> lambda 0.5587 0.18076 0.004042 0.009381
#> sige 115.4245 28.43619 0.635852 0.888693
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 41.9907 65.6687 73.8292 80.9384 96.28811
#> INC -1.7387 -1.2381 -0.9784 -0.7242 -0.15604
#> HOVAL -0.4862 -0.3574 -0.2916 -0.2226 -0.09726
#> lag.INC -2.0723 -1.3020 -0.8936 -0.4434 0.59112
#> lambda 0.1972 0.4340 0.5662 0.6876 0.89245
#> sige 72.3682 95.1949 111.6231 130.8639 184.15297
#>
print(summary(impacts(COL.err.Bayes)))
#> Impact measures (SDEM, MCMC, n):
#> Direct Indirect Total
#> INC dy/dx -0.9738103 -0.8515101 -1.8253205
#> HOVAL dy/dx -0.2913089 NA -0.2913089
#> ========================================================
#> Standard errors:
#> Direct Indirect Total
#> INC dy/dx 0.38352446 0.6414925 0.80945540
#> HOVAL dy/dx 0.09487901 NA 0.09487901
#> ========================================================
#> Z-values:
#> Direct Indirect Total
#> INC dy/dx -2.539109 -1.327389 -2.254998
#> HOVAL dy/dx -3.070320 NA -3.070320
#>
#> p-values:
#> Direct Indirect Total
#> INC dy/dx 0.0111135 0.18438 0.0241334
#> HOVAL dy/dx 0.0021383 NA 0.0021383
#>
print(raftery.diag(COL.err.Bayes, r=0.01))
#>
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95
#>
#> Burn-in Total Lower bound Dependence
#> (M) (N) (Nmin) factor (I)
#> (Intercept) 4 1192 937 1.27
#> INC 2 970 937 1.04
#> HOVAL 3 1010 937 1.08
#> lag.INC 3 1096 937 1.17
#> lambda 10 2853 937 3.04
#> sige 3 1096 937 1.17
#>
set.seed(1)
COL.sacW.B0 <- spBreg_sac(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
Durbin=FALSE, control=list(ndraw=1500L, nomit=500L))
print(summary(COL.sacW.B0))
#>
#> Iterations = 501:1500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 1000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 49.3672 9.23725 0.292108 0.847097
#> INC -0.9926 0.35774 0.011313 0.012624
#> HOVAL -0.2851 0.09743 0.003081 0.003263
#> rho 0.3117 0.19633 0.006208 0.020392
#> lambda 0.1927 0.27130 0.008579 0.027162
#> sige 105.0462 23.96805 0.757936 0.757936
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 31.8783 43.12583 48.9398 55.3574 68.1858
#> INC -1.6832 -1.24316 -0.9897 -0.7519 -0.3168
#> HOVAL -0.4952 -0.34665 -0.2824 -0.2210 -0.1056
#> rho -0.1548 0.21285 0.3406 0.4438 0.6176
#> lambda -0.3788 0.01455 0.2171 0.3794 0.6988
#> sige 68.5025 88.13909 101.6888 118.2545 160.2792
#>
print(summary(impacts(COL.sacW.B0, tr=trMatc), zstats=TRUE, short=TRUE))
#> Error in h(simpleError(msg, call)): error in evaluating the argument 'x' in selecting a method for function 'print': error in evaluating the argument 'object' in selecting a method for function 'summary': !is.null(have_factor_preds) is not TRUE
set.seed(1)
COL.sacW.B1 <- spBreg_sac(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
Durbin=TRUE, control=list(ndraw=1500L, nomit=500L))
print(summary(COL.sacW.B1))
#>
#> Iterations = 501:1500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 1000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 60.1838 24.14794 0.763625 3.169738
#> INC -0.9867 0.36792 0.011635 0.016566
#> HOVAL -0.2846 0.09652 0.003052 0.003052
#> lag.INC -0.8515 0.78628 0.024864 0.062589
#> lag.HOVAL 0.1613 0.22000 0.006957 0.013194
#> rho 0.1808 0.32007 0.010121 0.044327
#> lambda 0.2106 0.32071 0.010142 0.041657
#> sige 100.9463 24.32264 0.769149 0.946662
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 22.2039 42.288683 56.1160 76.4887 113.53259
#> INC -1.7178 -1.234666 -0.9798 -0.7341 -0.29938
#> HOVAL -0.4716 -0.348851 -0.2882 -0.2212 -0.09995
#> lag.INC -2.5407 -1.331269 -0.8239 -0.2905 0.57977
#> lag.HOVAL -0.3140 0.019228 0.1743 0.3207 0.56966
#> rho -0.5076 -0.021227 0.2386 0.4320 0.66552
#> lambda -0.5106 -0.007594 0.2129 0.4482 0.73889
#> sige 64.6867 84.395156 96.9599 114.4101 159.72524
#>
print(summary(impacts(COL.sacW.B1, tr=trMatc), zstats=TRUE, short=TRUE))
#> Error in h(simpleError(msg, call)): error in evaluating the argument 'x' in selecting a method for function 'print': error in evaluating the argument 'object' in selecting a method for function 'summary': !is.null(have_factor_preds) is not TRUE
set.seed(1)
COL.lag.Bayes <- spBreg_lag(CRIME ~ INC + HOVAL, data=COL.OLD,
listw=lw)
print(summary(COL.lag.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 45.8257 8.04909 0.179983 0.179983
#> INC -1.0451 0.34747 0.007770 0.007770
#> HOVAL -0.2646 0.09467 0.002117 0.002117
#> rho 0.4146 0.12434 0.002780 0.002780
#> sige 107.8827 23.75095 0.531087 0.561152
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 30.3334 40.3138 45.9376 51.0901 61.51043
#> INC -1.7383 -1.2753 -1.0490 -0.8131 -0.35775
#> HOVAL -0.4529 -0.3283 -0.2672 -0.1998 -0.07939
#> rho 0.1645 0.3327 0.4157 0.5008 0.64982
#> sige 70.5778 90.6316 104.8556 120.4706 164.75421
#>
print(summary(impacts(COL.lag.Bayes, tr=trMatc), short=TRUE, zstats=TRUE))
#> Impact measures (lag, trace):
#> Direct Indirect Total
#> INC dy/dx -1.0953808 -0.6898397 -1.7852206
#> HOVAL dy/dx -0.2773374 -0.1746592 -0.4519966
#> ========================================================
#> Simulation results ( variance matrix):
#> ========================================================
#> Simulated standard errors
#> Direct Indirect Total
#> INC dy/dx 0.3680593 0.4927835 0.7676915
#> HOVAL dy/dx 0.1013683 0.1396023 0.2186417
#>
#> Simulated z-values:
#> Direct Indirect Total
#> INC dy/dx -3.000611 -1.553258 -2.435643
#> HOVAL dy/dx -2.762349 -1.416629 -2.185215
#>
#> Simulated p-values:
#> Direct Indirect Total
#> INC dy/dx 0.0026944 0.12036 0.014865
#> HOVAL dy/dx 0.0057387 0.15659 0.028873
print(summary(impacts(COL.lag.Bayes, evalues=ev), short=TRUE, zstats=TRUE))
#> Impact measures (lag, evalues):
#> Direct Indirect Total
#> INC dy/dx -1.0953808 -0.6898397 -1.7852206
#> HOVAL dy/dx -0.2773374 -0.1746592 -0.4519966
#> ========================================================
#> Simulation results ( variance matrix):
#> ========================================================
#> Simulated standard errors
#> Direct Indirect Total
#> INC dy/dx 0.3680597 0.4929806 0.7678282
#> HOVAL dy/dx 0.1013693 0.1397559 0.2187586
#>
#> Simulated z-values:
#> Direct Indirect Total
#> INC dy/dx -3.000610 -1.552692 -2.435246
#> HOVAL dy/dx -2.762325 -1.415159 -2.184105
#>
#> Simulated p-values:
#> Direct Indirect Total
#> INC dy/dx 0.0026944 0.12050 0.014882
#> HOVAL dy/dx 0.0057391 0.15702 0.028955
set.seed(1)
COL.D0.Bayes <- spBreg_lag(CRIME ~ INC + HOVAL, data=COL.OLD,
listw=lw, Durbin=TRUE)
print(summary(COL.D0.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 45.3107 13.74684 0.307389 0.307389
#> INC -0.9265 0.36097 0.008072 0.008072
#> HOVAL -0.2953 0.09645 0.002157 0.002157
#> lag.INC -0.5809 0.63417 0.014180 0.014180
#> lag.HOVAL 0.2403 0.19712 0.004408 0.004408
#> rho 0.3932 0.16296 0.003644 0.003644
#> sige 109.7428 24.97328 0.558419 0.607251
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 19.18654 35.9928 45.0372 54.5241 72.8590
#> INC -1.61345 -1.1726 -0.9289 -0.6723 -0.2038
#> HOVAL -0.48474 -0.3592 -0.2911 -0.2303 -0.1120
#> lag.INC -1.80959 -1.0071 -0.5624 -0.1629 0.6417
#> lag.HOVAL -0.14728 0.1070 0.2422 0.3712 0.6297
#> rho 0.06053 0.2826 0.3977 0.5078 0.6949
#> sige 71.19421 91.4872 106.1690 123.7826 170.9503
#>
print(summary(impacts(COL.D0.Bayes, tr=trMatc), short=TRUE, zstats=TRUE))
#> Impact measures (mixed, trace):
#> Direct Indirect Total
#> INC dy/dx -1.0285748 -1.4558245 -2.48439926
#> HOVAL dy/dx -0.2819406 0.1912575 -0.09068317
#> ========================================================
#> Simulation results ( variance matrix):
#> ========================================================
#> Simulated standard errors
#> Direct Indirect Total
#> INC dy/dx 0.37638419 1.3242375 1.4653774
#> HOVAL dy/dx 0.09962537 0.3391573 0.3749744
#>
#> Simulated z-values:
#> Direct Indirect Total
#> INC dy/dx -2.786815 -1.2448377 -1.8407366
#> HOVAL dy/dx -2.842974 0.5564694 -0.2520215
#>
#> Simulated p-values:
#> Direct Indirect Total
#> INC dy/dx 0.0053229 0.21319 0.06566
#> HOVAL dy/dx 0.0044695 0.57789 0.80102
set.seed(1)
COL.D1.Bayes <- spBreg_lag(CRIME ~ DISCBD + INC + HOVAL, data=COL.OLD,
listw=lw, Durbin= ~ INC)
print(summary(COL.D1.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 56.3989 13.4165 0.300001 0.300001
#> DISCBD -4.7822 2.1376 0.047798 0.047798
#> INC -0.9428 0.3523 0.007879 0.007879
#> HOVAL -0.1798 0.0992 0.002218 0.002218
#> lag.INC 0.4245 0.6276 0.014032 0.014032
#> rho 0.1873 0.1817 0.004064 0.004064
#> sige 103.8701 23.2938 0.520866 0.566802
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 30.6341 47.653616 56.3831 65.0069 84.88715
#> DISCBD -9.2415 -6.145948 -4.7564 -3.3769 -0.71137
#> INC -1.6473 -1.172561 -0.9426 -0.7040 -0.26063
#> HOVAL -0.3740 -0.247027 -0.1784 -0.1137 0.01213
#> lag.INC -0.8189 0.005495 0.4286 0.8391 1.62933
#> rho -0.2116 0.072536 0.1836 0.3067 0.53579
#> sige 67.8946 87.054945 100.9815 116.4528 161.25826
#>
print(summary(impacts(COL.D1.Bayes, tr=trMatc), short=TRUE, zstats=TRUE))
#> Impact measures (mixed, trace):
#> Direct Indirect Total
#> DISCBD dy/dx -4.8222636 -1.06231152 -5.8845752
#> INC dy/dx -0.9317427 0.29394858 -0.6377941
#> HOVAL dy/dx -0.1813194 -0.03994341 -0.2212628
#> ========================================================
#> Simulation results ( variance matrix):
#> ========================================================
#> Simulated standard errors
#> Direct Indirect Total
#> DISCBD dy/dx 2.1860440 1.82012990 3.3711902
#> INC dy/dx 0.3550263 0.80940196 0.8890268
#> HOVAL dy/dx 0.1010600 0.06784374 0.1440444
#>
#> Simulated z-values:
#> Direct Indirect Total
#> DISCBD dy/dx -2.229183 -0.7467273 -1.8486744
#> INC dy/dx -2.642226 0.3366857 -0.7486227
#> HOVAL dy/dx -1.811991 -0.7373265 -1.6185484
#>
#> Simulated p-values:
#> Direct Indirect Total
#> DISCBD dy/dx 0.0258017 0.45523 0.064505
#> INC dy/dx 0.0082363 0.73635 0.454085
#> HOVAL dy/dx 0.0699876 0.46092 0.105544
#data(elect80, package="spData")
#lw <- spdep::nb2listw(e80_queen, zero.policy=TRUE)
#el_ml <- lagsarlm(log(pc_turnout) ~ log(pc_college) + log(pc_homeownership)
# + log(pc_income), data=elect80, listw=lw, zero.policy=TRUE, method="LU")
#print(summary(el_ml))
#set.seed(1)
#el_B <- spBreg_lag(log(pc_turnout) ~ log(pc_college) + log(pc_homeownership)
# + log(pc_income), data=elect80, listw=lw, zero.policy=TRUE)
#print(summary(el_B))
#print(el_ml$timings)
#print(attr(el_B, "timings"))
# }