Bayesian MCMC spatial simultaneous autoregressive model estimation
SET_MCMC.Rd
The spBreg_lag function is an early-release version of the Matlab Spatial Econometrics Toolbox function sar_g.m, using drawing by inversion, and not accommodating heteroskedastic disturbances.
Usage
spBreg_lag(formula, data = list(), listw, na.action, Durbin, type,
zero.policy=NULL, control=list())
spBreg_sac(formula, data = list(), listw, listw2=NULL, na.action,
Durbin, type, zero.policy=NULL, control=list())
spBreg_err(formula, data = list(), listw, na.action, Durbin, etype,
zero.policy=NULL, control=list())
# S3 method for class 'MCMC_sar_G'
impacts(obj, ..., tr=NULL, listw=NULL, evalues=NULL, Q=NULL)
# S3 method for class 'MCMC_sem_G'
impacts(obj, ..., tr=NULL, listw=NULL, evalues=NULL, Q=NULL)
# S3 method for class 'MCMC_sac_G'
impacts(obj, ..., tr=NULL, listw=NULL, evalues=NULL, Q=NULL)
Arguments
- formula
a symbolic description of the model to be fit. The details of model specification are given for lm()
- data
an optional data frame containing the variables in the model. By default the variables are taken from the environment from which the function is called.
- listw, listw2
a listw object created for example by nb2listw
- na.action
a function (default options("na.action")), can also be na.omit or na.exclude with consequences for residuals and fitted values - in these cases the weights list will be subsetted to remove NAs in the data. It may be necessary to set zero.policy to TRUE because this subsetting may create no-neighbour observations. Note that only weights lists created without using the glist argument to nb2listw may be subsetted.
- Durbin
default FALSE (spatial lag model); if TRUE, full spatial Durbin model; if a formula object, the subset of explanatory variables to lag
- type, etype
(use the ‘Durbin=’ argument - retained for backwards compatibility only) default "lag", may be set to "mixed"; when "mixed", the lagged intercept is dropped for spatial weights style "W", that is row-standardised weights, but otherwise included; “Durbin” may be used instead of “mixed”
- zero.policy
default NULL, use global option value; if TRUE assign zero to the lagged value of zones without neighbours, if FALSE (default) assign NA
- control
list of extra control arguments - see section below
- obj
A spatial regression object
- ...
Arguments passed through to methods in the coda package
- tr
A vector of traces of powers of the spatial weights matrix created using trW, for approximate impact measures; if not given, listw must be given for exact measures (for small to moderate spatial weights matrices); the traces must be for the same spatial weights as were used in fitting the spatial regression, and must be row-standardised
- evalues
vector of eigenvalues of spatial weights matrix for impacts calculations
- Q
default NULL, else an integer number of cumulative power series impacts to calculate if tr is given
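For orientation, the main arguments combine as in the following minimal sketch, which re-uses the Columbus data from spdep as in the Examples below; the object names fit_sketch and trMat are illustrative only, and spatialreg is assumed to be attached.
data(oldcol, package="spdep")
lw <- spdep::nb2listw(COL.nb, style="W")          # the listw argument
W <- as(lw, "CsparseMatrix")
trMat <- trW(W, type="mult")                      # traces for the tr argument of impacts()
# Durbin given as a one-sided formula lags only INC
fit_sketch <- spBreg_lag(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
                         Durbin=~INC)
summary(impacts(fit_sketch, tr=trMat), zstats=TRUE, short=TRUE)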
Control arguments
- tol.opt:
the desired accuracy of the optimization - passed to optimize() (default = square root of double precision machine tolerance; a larger root may be needed, see help(boston) for an example)
- fdHess:
default NULL, then set to (method != "eigen") internally; use fdHess to compute an approximate Hessian using finite differences when using sparse matrix methods; used to make a coefficient covariance matrix when the number of observations is large; may be turned off to save resources if need be
- optimHess:
default FALSE, use fdHess from nlme; if TRUE, use optim to calculate the Hessian at the optimum
- optimHessMethod:
default “optimHess”, may be “nlm” or one of the optim methods
- compiled_sse:
default FALSE; logical value used in the log likelihood function to choose compiled code for computing SSE
- Imult:
default 2; used for preparing the Cholesky decompositions for updating in the Jacobian function
- super:
if NULL (default), set to FALSE to use a simplicial decomposition for the sparse Cholesky decomposition and method “Matrix_J”, set to as.logical(NA) for method “Matrix”, if TRUE, use a supernodal decomposition
- cheb_q:
default 5; highest power of the approximating polynomial for the Chebyshev approximation
- MC_p:
default 16; number of random variates
- MC_m:
default 30; number of products of random variates matrix and spatial weights matrix
- spamPivot:
default “MMD”, alternative “RCM”
- in_coef
default 0.1, coefficient value for initial Cholesky decomposition in “spam_update”
- type
default “MC”, used with method “moments”; alternatives “mult” and “moments”, for use if trs is missing (see trW)
- correct
default TRUE, used with method “moments” to compute the Smirnov/Anselin correction term
- trunc
default TRUE, used with method “moments” to truncate the Smirnov/Anselin correction term
- SE_method
default “LU”, may be “MC”
- nrho
default 200, as in SE toolbox; the size of the first stage lndet grid; it may be reduced to, for example, 40
- interpn
default 2000, as in SE toolbox; the size of the second stage lndet grid
- small_asy
default TRUE; if the method is not “eigen”, use asymptotic covariances rather than numerical Hessian ones if n <= small
- small
default 1500; threshold number of observations for asymptotic covariances when the method is not “eigen”
- SElndet
default NULL, may be used to pass a pre-computed SE toolbox style matrix of coefficients and their lndet values to the "SE_classic" and "SE_whichMin" methods
- LU_order
default FALSE; used in “LU_prepermutate”, note warnings given for the lu method
- pre_eig
default NULL; may be used to pass a pre-computed vector of eigenvalues
- OrdVsign
default 1; used to set the sign of the final component to negative if -1 (alpha times ((sigma squared) squared) in Ord (1975) equation B.1).
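These Jacobian-related settings are passed through the same control= list as the Bayesian settings described in the next section. The following hedged sketch only illustrates the mechanism, with values taken from the suggestions above and the data set up as in the Examples below:
data(oldcol, package="spdep")
lw <- spdep::nb2listw(COL.nb, style="W")
set.seed(1)
# illustrative only: smaller first-stage lndet grid (nrho), default SE_method made explicit
fit_ctrl <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
                       control=list(nrho=40L, SE_method="LU"))
summary(fit_ctrl)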
Extra Bayesian control arguments
- ldet_method
default “SE_classic”; equivalent to the method argument in lagsarlm
- interval
default c(-1, 1); used unmodified or set internally by jacobianSetup
- ndraw
default 2500L; integer total number of draws
- nomit
default 500L; integer total number of omitted burn-in draws
- thin
default 1L; integer thinning proportion
- verbose
default FALSE; inverse of quiet argument in lagsarlm
- detval
default NULL; not yet in use, precomputed matrix of log determinants
- prior
a list with the following components:
- rhoMH, lambdaMH
default FALSE; use Metropolis or griddy Gibbs
- Tbeta
default NULL; values of the betas variance-covariance matrix, set to diag(k)*1e+12 if NULL
- c_beta
default NULL; values of the betas set to 0 if NULL
- rho
default 0.5; value of the autoregressive coefficient
- sige
default 1; value of the residual variance
- nu
default 0; informative Gamma(nu,d0) prior on sige
- d0
default 0; informative Gamma(nu,d0) prior on sige
- a1
default 1.01; parameter for beta(a1,a2) prior on rho
- a2
default 1.01; parameter for beta(a1,a2) prior on rho
- cc
default 0.2; initial tuning parameter for M-H sampling
- gG_sige
default TRUE; include sige in lambda griddy Gibbs update
- cc1
default 0.2; initial tuning parameter for M-H sampling
- cc2
default 0.2; initial tuning parameter for M-H sampling
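As a sketch of how the sampler settings and the prior list fit together, the call below uses the documented defaults, except lambdaMH, which is switched on as in the Examples; the data setup mirrors the Examples and the object name fit_prior is illustrative only.
data(oldcol, package="spdep")
lw <- spdep::nb2listw(COL.nb, style="W")
set.seed(1)
# illustrative only: explicit chain length, burn-in and thinning, with a
# Metropolis-Hastings update for lambda and the default beta(a1, a2) prior
fit_prior <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
                        control=list(ndraw=2500L, nomit=500L, thin=1L,
                                     prior=list(lambdaMH=TRUE, a1=1.01, a2=1.01)))
summary(fit_prior)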
Author
Roger Bivand Roger.Bivand@nhh.no, with thanks to Abhirup Mallik and Virgilio Gómez-Rubio for initial coding (GSoC 2011)
Examples
#require("spdep", quietly=TRUE)
data(oldcol, package="spdep")
lw <- spdep::nb2listw(COL.nb, style="W")
require("coda", quietly=TRUE)
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw)
print(summary(COL.err.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 59.6663 6.99423 0.156396 0.156396
#> INC -0.9421 0.39894 0.008921 0.008921
#> HOVAL -0.3011 0.09984 0.002233 0.002233
#> lambda 0.5664 0.15362 0.003435 0.003435
#> sige 115.0482 25.96435 0.580581 0.622638
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 45.6439 55.3852 60.1164 64.5665 72.1349
#> INC -1.7360 -1.2047 -0.9445 -0.6803 -0.1379
#> HOVAL -0.4977 -0.3683 -0.3010 -0.2346 -0.1070
#> lambda 0.2335 0.4717 0.5773 0.6758 0.8309
#> sige 74.4245 96.7013 111.9152 129.2463 176.8133
#>
print(raftery.diag(COL.err.Bayes, r=0.01))
#>
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95
#>
#> Burn-in Total Lower bound Dependence
#> (M) (N) (Nmin) factor (I)
#> (Intercept) 3 1052 937 1.120
#> INC 2 969 937 1.030
#> HOVAL 3 1052 937 1.120
#> lambda 3 1010 937 1.080
#> sige 2 930 937 0.993
#>
# \dontrun{
ev <- eigenw(lw)
W <- as(lw, "CsparseMatrix")
trMatc <- trW(W, type="mult")
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
control=list(prior=list(lambdaMH=TRUE)))
print(summary(COL.err.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 59.5304 8.9916 0.201059 0.275885
#> INC -0.9142 0.3969 0.008874 0.011266
#> HOVAL -0.3027 0.1021 0.002282 0.002282
#> lambda 0.6011 0.1502 0.003358 0.007723
#> sige 116.6773 27.9707 0.625443 0.660902
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 44.0607 54.802 59.6796 64.2473 72.9189
#> INC -1.6866 -1.184 -0.9177 -0.6422 -0.1429
#> HOVAL -0.5102 -0.370 -0.3005 -0.2359 -0.1019
#> lambda 0.2826 0.511 0.6135 0.7023 0.8676
#> sige 74.1788 97.167 113.2955 130.2498 182.6024
#>
print(raftery.diag(COL.err.Bayes, r=0.01))
#>
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95
#>
#> Burn-in Total Lower bound Dependence
#> (M) (N) (Nmin) factor (I)
#> (Intercept) 3 1143 937 1.220
#> INC 2 969 937 1.030
#> HOVAL 2 892 937 0.952
#> lambda 17 4454 937 4.750
#> sige 2 930 937 0.993
#>
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
Durbin=TRUE)
print(summary(COL.err.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 72.66146 11.7776 0.263354 0.263354
#> INC -1.01262 0.3770 0.008430 0.008430
#> HOVAL -0.28032 0.1057 0.002363 0.002363
#> lag.INC -1.07754 0.7304 0.016333 0.016333
#> lag.HOVAL 0.09151 0.2395 0.005356 0.005702
#> lambda 0.48878 0.1732 0.003873 0.003873
#> sige 114.61924 26.8321 0.599984 0.653970
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 48.2063 65.55434 73.00434 80.0130 95.69254
#> INC -1.7385 -1.27266 -1.01806 -0.7591 -0.25126
#> HOVAL -0.4883 -0.35221 -0.27730 -0.2111 -0.07194
#> lag.INC -2.4291 -1.54241 -1.09764 -0.6156 0.38091
#> lag.HOVAL -0.3892 -0.06269 0.09028 0.2468 0.55625
#> lambda 0.1165 0.37669 0.49775 0.6118 0.79690
#> sige 73.4822 95.76803 110.90096 128.2633 177.00970
#>
print(summary(impacts(COL.err.Bayes)))
#> Impact measures (SDEM, MCMC, n):
#> Direct Indirect Total
#> INC -1.0126232 -1.07753574 -2.090159
#> HOVAL -0.2803178 0.09150584 -0.188812
#> ========================================================
#> Standard errors:
#> Direct Indirect Total
#> INC 0.3572678 0.6921465 0.8161704
#> HOVAL 0.1001564 0.2269602 0.2729228
#> ========================================================
#> Z-values:
#> Direct Indirect Total
#> INC -2.834354 -1.5568031 -2.5609345
#> HOVAL -2.798799 0.4031801 -0.6918146
#>
#> p-values:
#> Direct Indirect Total
#> INC 0.0045918 0.11952 0.010439
#> HOVAL 0.0051293 0.68682 0.489054
#>
print(raftery.diag(COL.err.Bayes, r=0.01))
#>
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95
#>
#> Burn-in Total Lower bound Dependence
#> (M) (N) (Nmin) factor (I)
#> (Intercept) 2 930 937 0.993
#> INC 3 1096 937 1.170
#> HOVAL 3 1052 937 1.120
#> lag.INC 2 892 937 0.952
#> lag.HOVAL 2 930 937 0.993
#> lambda 2 930 937 0.993
#> sige 2 930 937 0.993
#>
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
Durbin=TRUE, control=list(prior=list(lambdaMH=TRUE)))
print(summary(COL.err.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 71.32778 18.0269 0.403094 0.382389
#> INC -0.99169 0.3978 0.008896 0.008896
#> HOVAL -0.27840 0.1115 0.002494 0.002494
#> lag.INC -0.95360 0.8293 0.018543 0.021986
#> lag.HOVAL 0.06812 0.2557 0.005717 0.005904
#> lambda 0.58149 0.1895 0.004238 0.010821
#> sige 119.53101 28.0026 0.626158 0.839390
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 37.9514 62.67210 72.07446 80.4143 99.84194
#> INC -1.7844 -1.24565 -0.99600 -0.7380 -0.21146
#> HOVAL -0.4977 -0.35256 -0.27741 -0.2077 -0.05449
#> lag.INC -2.4414 -1.50144 -0.99500 -0.4736 0.79318
#> lag.HOVAL -0.4571 -0.09065 0.07663 0.2398 0.55644
#> lambda 0.1781 0.45916 0.59241 0.7231 0.90210
#> sige 75.6501 99.39723 116.16515 135.1114 184.61449
#>
print(summary(impacts(COL.err.Bayes)))
#> Impact measures (SDEM, MCMC, n):
#> Direct Indirect Total
#> INC -0.9916937 -0.95359651 -1.945290
#> HOVAL -0.2784029 0.06812087 -0.210282
#> ========================================================
#> Standard errors:
#> Direct Indirect Total
#> INC 0.3770053 0.7858279 0.9655825
#> HOVAL 0.1056706 0.2422841 0.2998021
#> ========================================================
#> Z-values:
#> Direct Indirect Total
#> INC -2.63045 -1.2134929 -2.0146288
#> HOVAL -2.63463 0.2811611 -0.7014028
#>
#> p-values:
#> Direct Indirect Total
#> INC 0.0085272 0.22494 0.043944
#> HOVAL 0.0084229 0.77859 0.483052
#>
print(raftery.diag(COL.err.Bayes, r=0.01))
#>
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95
#>
#> Burn-in Total Lower bound Dependence
#> (M) (N) (Nmin) factor (I)
#> (Intercept) 4 1192 937 1.270
#> INC 2 930 937 0.993
#> HOVAL 2 930 937 0.993
#> lag.INC 3 1013 937 1.080
#> lag.HOVAL 2 969 937 1.030
#> lambda 17 4647 937 4.960
#> sige 3 1052 937 1.120
#>
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
Durbin=~INC)
print(summary(COL.err.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 74.5397 10.99197 0.245788 0.245788
#> INC -1.0322 0.38342 0.008574 0.008574
#> HOVAL -0.2861 0.09986 0.002233 0.002233
#> lag.INC -0.9210 0.59074 0.013209 0.012745
#> lambda 0.4879 0.17196 0.003845 0.003672
#> sige 112.0817 25.47154 0.569561 0.568218
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 52.0865 67.8131 74.9358 81.6887 95.01447
#> INC -1.7848 -1.2893 -1.0382 -0.7802 -0.24806
#> HOVAL -0.4794 -0.3497 -0.2860 -0.2203 -0.08916
#> lag.INC -2.0803 -1.3131 -0.9206 -0.5438 0.27869
#> lambda 0.1355 0.3737 0.4987 0.6088 0.80100
#> sige 73.0279 94.1912 108.3141 125.5058 173.13320
#>
print(summary(impacts(COL.err.Bayes)))
#> Impact measures (SDEM, MCMC, n):
#> Direct Indirect Total
#> INC -1.0321950 -0.9210334 -1.9532284
#> HOVAL -0.2860547 NA -0.2860547
#> ========================================================
#> Standard errors:
#> Direct Indirect Total
#> INC 0.36744116 0.5661167 0.69838314
#> HOVAL 0.09570117 NA 0.09570117
#> ========================================================
#> Z-values:
#> Direct Indirect Total
#> INC -2.809144 -1.626932 -2.796786
#> HOVAL -2.989041 NA -2.989041
#>
#> p-values:
#> Direct Indirect Total
#> INC 0.0049673 0.10375 0.0051614
#> HOVAL 0.0027985 NA 0.0027985
#>
print(raftery.diag(COL.err.Bayes, r=0.01))
#>
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95
#>
#> Burn-in Total Lower bound Dependence
#> (M) (N) (Nmin) factor (I)
#> (Intercept) 3 1010 937 1.080
#> INC 2 930 937 0.993
#> HOVAL 2 930 937 0.993
#> lag.INC 2 930 937 0.993
#> lambda 2 892 937 0.952
#> sige 3 1010 937 1.080
#>
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
Durbin=~INC, control=list(prior=list(lambdaMH=TRUE)))
print(summary(COL.err.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 72.7704 14.13707 0.316114 0.351470
#> INC -0.9865 0.39508 0.008834 0.008834
#> HOVAL -0.2904 0.09878 0.002209 0.002209
#> lag.INC -0.8506 0.67466 0.015086 0.016187
#> lambda 0.5688 0.17577 0.003930 0.009250
#> sige 116.1069 27.61223 0.617428 0.789579
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 40.6365 65.4333 73.7510 81.2815 97.03388
#> INC -1.7216 -1.2471 -1.0006 -0.7335 -0.15124
#> HOVAL -0.4820 -0.3577 -0.2903 -0.2257 -0.09267
#> lag.INC -2.0923 -1.2942 -0.8863 -0.4615 0.58595
#> lambda 0.2251 0.4389 0.5731 0.6982 0.89484
#> sige 74.0162 96.6611 112.0106 130.7735 181.83168
#>
print(summary(impacts(COL.err.Bayes)))
#> Impact measures (SDEM, MCMC, n):
#> Direct Indirect Total
#> INC -0.9865093 -0.8506053 -1.8371146
#> HOVAL -0.2904128 NA -0.2904128
#> ========================================================
#> Standard errors:
#> Direct Indirect Total
#> INC 0.37860903 0.6465345 0.82479429
#> HOVAL 0.09466321 NA 0.09466321
#> ========================================================
#> Z-values:
#> Direct Indirect Total
#> INC -2.605615 -1.315638 -2.227361
#> HOVAL -3.067853 NA -3.067853
#>
#> p-values:
#> Direct Indirect Total
#> INC 0.009171 0.1883 0.025923
#> HOVAL 0.002156 NA 0.002156
#>
print(raftery.diag(COL.err.Bayes, r=0.01))
#>
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95
#>
#> Burn-in Total Lower bound Dependence
#> (M) (N) (Nmin) factor (I)
#> (Intercept) 10 3436 937 3.670
#> INC 2 930 937 0.993
#> HOVAL 2 892 937 0.952
#> lag.INC 3 1143 937 1.220
#> lambda 13 3579 937 3.820
#> sige 3 1096 937 1.170
#>
set.seed(1)
COL.sacW.B0 <- spBreg_sac(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
Durbin=FALSE, control=list(ndraw=1500L, nomit=500L))
print(summary(COL.sacW.B0))
#>
#> Iterations = 501:1500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 1000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 49.3672 9.23725 0.292108 0.847100
#> INC -0.9925 0.35777 0.011314 0.012624
#> HOVAL -0.2847 0.09798 0.003099 0.003307
#> rho 0.3117 0.19633 0.006208 0.020392
#> lambda 0.1927 0.27130 0.008579 0.027162
#> sige 105.0462 23.96805 0.757936 0.757936
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 31.8783 43.12583 48.9398 55.3573 68.1858
#> INC -1.6817 -1.24459 -0.9872 -0.7516 -0.3168
#> HOVAL -0.4952 -0.34433 -0.2827 -0.2201 -0.1028
#> rho -0.1548 0.21285 0.3406 0.4438 0.6176
#> lambda -0.3788 0.01455 0.2171 0.3794 0.6988
#> sige 68.5025 88.13909 101.6888 118.2545 160.2792
#>
print(summary(impacts(COL.sacW.B0, tr=trMatc), zstats=TRUE, short=TRUE))
#> Impact measures (sac, trace):
#> Direct Indirect Total
#> INC -1.017325 -0.4246675 -1.4419930
#> HOVAL -0.291816 -0.1218143 -0.4136303
#> ========================================================
#> Simulation results ( variance matrix):
#> ========================================================
#> Simulated standard errors
#> Direct Indirect Total
#> INC 0.3668822 0.4180525 0.6533395
#> HOVAL 0.1016518 0.1341689 0.1983041
#>
#> Simulated z-values:
#> Direct Indirect Total
#> INC -2.808257 -1.219897 -2.357550
#> HOVAL -2.910868 -1.106171 -2.240542
#>
#> Simulated p-values:
#> Direct Indirect Total
#> INC 0.0049810 0.22250 0.018396
#> HOVAL 0.0036043 0.26865 0.025056
set.seed(1)
COL.sacW.B1 <- spBreg_sac(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
Durbin=TRUE, control=list(ndraw=1500L, nomit=500L))
print(summary(COL.sacW.B1))
#>
#> Iterations = 501:1500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 1000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 60.1834 24.14830 0.763636 3.169805
#> INC -0.9888 0.36234 0.011458 0.016240
#> HOVAL -0.2861 0.09785 0.003094 0.003094
#> lag.INC -0.8734 0.77381 0.024470 0.060097
#> lag.HOVAL 0.1735 0.21943 0.006939 0.013468
#> rho 0.1808 0.32007 0.010121 0.044327
#> lambda 0.2106 0.32071 0.010142 0.041657
#> sige 100.9463 24.32264 0.769149 0.946662
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 22.2042 42.280965 56.1010 76.4887 113.5316
#> INC -1.6963 -1.228821 -0.9813 -0.7271 -0.3299
#> HOVAL -0.4747 -0.350819 -0.2877 -0.2214 -0.1097
#> lag.INC -2.5274 -1.342333 -0.8558 -0.3281 0.5407
#> lag.HOVAL -0.2876 0.037091 0.1824 0.3249 0.5910
#> rho -0.5076 -0.021227 0.2386 0.4320 0.6655
#> lambda -0.5106 -0.007594 0.2129 0.4482 0.7389
#> sige 64.6867 84.395156 96.9599 114.4101 159.7252
#>
print(summary(impacts(COL.sacW.B1, tr=trMatc), zstats=TRUE, short=TRUE))
#> Impact measures (sacmixed, trace):
#> Direct Indirect Total
#> INC -1.0341008 -1.2391410 -2.2732417
#> HOVAL -0.2808381 0.1434285 -0.1374096
#> ========================================================
#> Simulation results ( variance matrix):
#> ========================================================
#> Simulated standard errors
#> Direct Indirect Total
#> INC 0.3510018 0.9466895 1.0370366
#> HOVAL 0.1010491 0.2773738 0.3171738
#>
#> Simulated z-values:
#> Direct Indirect Total
#> INC -2.966104 -1.4250264 -2.3048033
#> HOVAL -2.775326 0.5576145 -0.3965541
#>
#> Simulated p-values:
#> Direct Indirect Total
#> INC 0.0030160 0.15415 0.021178
#> HOVAL 0.0055146 0.57711 0.691696
set.seed(1)
COL.lag.Bayes <- spBreg_lag(CRIME ~ INC + HOVAL, data=COL.OLD,
listw=lw)
print(summary(COL.lag.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 45.8256 8.04911 0.179984 0.179984
#> INC -1.0459 0.34746 0.007770 0.007770
#> HOVAL -0.2662 0.09366 0.002094 0.002094
#> rho 0.4146 0.12434 0.002780 0.002780
#> sige 107.8827 23.75095 0.531087 0.561152
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 30.3334 40.3140 45.9376 51.0903 61.51008
#> INC -1.7252 -1.2774 -1.0477 -0.8158 -0.35423
#> HOVAL -0.4533 -0.3312 -0.2686 -0.2019 -0.08107
#> rho 0.1645 0.3327 0.4157 0.5008 0.64982
#> sige 70.5778 90.6316 104.8556 120.4706 164.75421
#>
print(summary(impacts(COL.lag.Bayes, tr=trMatc), short=TRUE, zstats=TRUE))
#> Impact measures (lag, trace):
#> Direct Indirect Total
#> INC -1.0962113 -0.6903627 -1.7865741
#> HOVAL -0.2790451 -0.1757347 -0.4547797
#> ========================================================
#> Simulation results ( variance matrix):
#> ========================================================
#> Simulated standard errors
#> Direct Indirect Total
#> INC 0.3680189 0.4927553 0.7674941
#> HOVAL 0.1002735 0.1393959 0.2171077
#>
#> Simulated z-values:
#> Direct Indirect Total
#> INC -3.003178 -1.554252 -2.437925
#> HOVAL -2.809370 -1.425055 -2.212507
#>
#> Simulated p-values:
#> Direct Indirect Total
#> INC 0.0026718 0.12012 0.014772
#> HOVAL 0.0049639 0.15414 0.026932
print(summary(impacts(COL.lag.Bayes, evalues=ev), short=TRUE, zstats=TRUE))
#> Impact measures (lag, evalues):
#> Direct Indirect Total
#> INC -1.0962113 -0.6903627 -1.7865741
#> HOVAL -0.2790451 -0.1757347 -0.4547797
#> ========================================================
#> Simulation results ( variance matrix):
#> ========================================================
#> Simulated standard errors
#> Direct Indirect Total
#> INC 0.3680193 0.4929524 0.7676307
#> HOVAL 0.1002746 0.1395496 0.2172252
#>
#> Simulated z-values:
#> Direct Indirect Total
#> INC -3.003178 -1.553686 -2.437527
#> HOVAL -2.809345 -1.423571 -2.211367
#>
#> Simulated p-values:
#> Direct Indirect Total
#> INC 0.0026718 0.12026 0.014788
#> HOVAL 0.0049642 0.15457 0.027010
set.seed(1)
COL.D0.Bayes <- spBreg_lag(CRIME ~ INC + HOVAL, data=COL.OLD,
listw=lw, Durbin=TRUE)
print(summary(COL.D0.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 45.3105 13.74662 0.307384 0.307384
#> INC -0.9246 0.36837 0.008237 0.008237
#> HOVAL -0.2959 0.09778 0.002187 0.002126
#> lag.INC -0.5917 0.63940 0.014297 0.014297
#> lag.HOVAL 0.2443 0.18959 0.004239 0.004239
#> rho 0.3932 0.16296 0.003644 0.003644
#> sige 109.7428 24.97328 0.558419 0.607251
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 19.18483 35.9928 45.0377 54.5246 72.8590
#> INC -1.62602 -1.1728 -0.9198 -0.6633 -0.2117
#> HOVAL -0.48546 -0.3597 -0.2944 -0.2310 -0.1012
#> lag.INC -1.83696 -1.0334 -0.5786 -0.1739 0.6326
#> lag.HOVAL -0.13032 0.1164 0.2417 0.3715 0.6133
#> rho 0.06053 0.2826 0.3977 0.5078 0.6949
#> sige 71.19421 91.4872 106.1690 123.7826 170.9503
#>
print(summary(impacts(COL.D0.Bayes, tr=trMatc), short=TRUE, zstats=TRUE))
#> Impact measures (mixed, trace):
#> Direct Indirect Total
#> INC -1.0277896 -1.471320 -2.49911007
#> HOVAL -0.2820493 0.197023 -0.08502627
#> ========================================================
#> Simulation results ( variance matrix):
#> ========================================================
#> Simulated standard errors
#> Direct Indirect Total
#> INC 0.3831777 1.3179822 1.4600855
#> HOVAL 0.1017455 0.3302926 0.3686927
#>
#> Simulated z-values:
#> Direct Indirect Total
#> INC -2.735660 -1.2618796 -1.8570000
#> HOVAL -2.785094 0.5880284 -0.2417988
#>
#> Simulated p-values:
#> Direct Indirect Total
#> INC 0.0062255 0.20699 0.063311
#> HOVAL 0.0053512 0.55651 0.808936
set.seed(1)
COL.D1.Bayes <- spBreg_lag(CRIME ~ DISCBD + INC + HOVAL, data=COL.OLD,
listw=lw, Durbin= ~ INC)
print(summary(COL.D1.Bayes))
#>
#> Iterations = 501:2500
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 2000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> (Intercept) 56.4193 13.40915 0.299838 0.299838
#> DISCBD -4.7794 2.14113 0.047877 0.047877
#> INC -0.9422 0.34677 0.007754 0.007754
#> HOVAL -0.1806 0.09837 0.002200 0.002200
#> lag.INC 0.4213 0.63243 0.014141 0.014141
#> rho 0.1873 0.18174 0.004064 0.004064
#> sige 103.8701 23.29384 0.520866 0.566802
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> (Intercept) 30.6341 47.656256 56.4238 65.0877 84.808823
#> DISCBD -9.2565 -6.160525 -4.7536 -3.3735 -0.714089
#> INC -1.6336 -1.174123 -0.9427 -0.7090 -0.263713
#> HOVAL -0.3719 -0.247402 -0.1791 -0.1148 0.006329
#> lag.INC -0.8379 -0.005461 0.4129 0.8399 1.648619
#> rho -0.2116 0.072536 0.1836 0.3067 0.535793
#> sige 67.8946 87.054945 100.9815 116.4528 161.258258
#>
print(summary(impacts(COL.D1.Bayes, tr=trMatc), short=TRUE, zstats=TRUE))
#> Impact measures (mixed, trace):
#> Direct Indirect Total
#> DISCBD -4.8194767 -1.06169757 -5.8811743
#> INC -0.9312352 0.29032298 -0.6409122
#> HOVAL -0.1821150 -0.04011868 -0.2222336
#> ========================================================
#> Simulation results ( variance matrix):
#> ========================================================
#> Simulated standard errors
#> Direct Indirect Total
#> DISCBD 2.1896626 1.82105239 3.3751922
#> INC 0.3494541 0.81612759 0.8914013
#> HOVAL 0.1001927 0.06787487 0.1430090
#>
#> Simulated z-values:
#> Direct Indirect Total
#> DISCBD -2.224202 -0.7454394 -1.8451502
#> INC -2.683306 0.3286082 -0.7510717
#> HOVAL -1.835800 -0.7422472 -1.6384544
#>
#> Simulated p-values:
#> Direct Indirect Total
#> DISCBD 0.0261348 0.45601 0.065016
#> INC 0.0072898 0.74245 0.452610
#> HOVAL 0.0663872 0.45794 0.101327
#data(elect80, package="spData")
#lw <- spdep::nb2listw(e80_queen, zero.policy=TRUE)
#el_ml <- lagsarlm(log(pc_turnout) ~ log(pc_college) + log(pc_homeownership)
# + log(pc_income), data=elect80, listw=lw, zero.policy=TRUE, method="LU")
#print(summary(el_ml))
#set.seed(1)
#el_B <- spBreg_lag(log(pc_turnout) ~ log(pc_college) + log(pc_homeownership)
# + log(pc_income), data=elect80, listw=lw, zero.policy=TRUE)
#print(summary(el_B))
#print(el_ml$timings)
#print(attr(el_B, "timings"))
# }