The spBreg_lag function is an early-release R version of the Matlab Spatial Econometrics Toolbox function sar_g.m; it draws by inversion and does not accommodate heteroskedastic disturbances.

Usage

spBreg_lag(formula, data = list(), listw, na.action, Durbin, type,
    zero.policy=NULL, control=list())
spBreg_sac(formula, data = list(), listw, listw2=NULL, na.action, 
    Durbin, type, zero.policy=NULL, control=list())
spBreg_err(formula, data = list(), listw, na.action, Durbin, etype,
    zero.policy=NULL, control=list())
# S3 method for class 'MCMC_sar_G'
impacts(obj, ..., tr=NULL, listw=NULL, evalues=NULL, Q=NULL)
# S3 method for class 'MCMC_sem_G'
impacts(obj, ..., tr=NULL, listw=NULL, evalues=NULL, Q=NULL)
# S3 method for class 'MCMC_sac_G'
impacts(obj, ..., tr=NULL, listw=NULL, evalues=NULL, Q=NULL)

Arguments

formula

a symbolic description of the model to be fit. The details of model specification are given in the documentation for lm()

data

an optional data frame containing the variables in the model. By default the variables are taken from the environment from which the function is called.

listw, listw2

a listw object created for example by nb2listw

na.action

a function (default options("na.action")); it can also be na.omit or na.exclude, with consequences for residuals and fitted values - in these cases the weights list will be subsetted to remove NAs in the data. It may be necessary to set zero.policy to TRUE because this subsetting may create no-neighbour observations. Note that only weights lists created without using the glist argument to nb2listw may be subsetted.

Durbin

default FALSE (spatial lag model); if TRUE, full spatial Durbin model; if a formula object, the subset of explanatory variables to lag

type, etype

(use the ‘Durbin=’ argument - retained for backwards compatibility only) default "lag", may be set to "mixed"; when "mixed", the lagged intercept is dropped for spatial weights style "W", that is row-standardised weights, but otherwise included; “Durbin” may be used instead of “mixed”

zero.policy

default NULL, use global option value; if TRUE assign zero to the lagged value of zones without neighbours, if FALSE assign NA

control

list of extra control arguments - see section below

obj

A spatial regression object

...

Arguments passed through to methods in the coda package

tr

A vector of traces of powers of the spatial weights matrix created using trW, for approximate impact measures; if not given, listw must be given for exact measures (for small to moderate spatial weights matrices); the traces must be for the same spatial weights as were used in fitting the spatial regression, and must be row-standardised

evalues

vector of eigenvalues of spatial weights matrix for impacts calculations

Q

default NULL, else an integer number of cumulative power series impacts to calculate if tr is given
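As a brief sketch of how these impacts arguments fit together (assuming the Columbus data COL.OLD and weights lw are set up as in the Examples below; the value Q=5 is purely illustrative), traces from trW give approximate impact measures, while eigenvalues from eigenw give exact measures:

# sketch only: traces for approximate impacts, eigenvalues for exact impacts
W <- as(lw, "CsparseMatrix")
trMatc <- trW(W, type="mult")
fit <- spBreg_lag(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw)
summary(impacts(fit, tr=trMatc, Q=5), zstats=TRUE, short=TRUE)
summary(impacts(fit, evalues=eigenw(lw)))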

Control arguments

tol.opt:

the desired accuracy of the optimization - passed to optimize() (default=square root of double precision machine tolerance; a larger root may be needed, see help(boston) for an example)

fdHess:

default NULL, then set to (method != "eigen") internally; use fdHess to compute an approximate Hessian using finite differences when using sparse matrix methods; used to make a coefficient covariance matrix when the number of observations is large; may be turned off to save resources if need be

optimHess:

default FALSE, use fdHess from nlme; if TRUE, use optim to calculate the Hessian at the optimum

optimHessMethod:

default “optimHess”, may be “nlm” or one of the optim methods

compiled_sse:

default FALSE; logical value used in the log likelihood function to choose compiled code for computing SSE

Imult:

default 2; used for preparing the Cholesky decompositions for updating in the Jacobian function

super:

if NULL (default), set to FALSE to use a simplicial decomposition for the sparse Cholesky decomposition and method “Matrix_J”, set to as.logical(NA) for method “Matrix”, if TRUE, use a supernodal decomposition

cheb_q:

default 5; highest power of the approximating polynomial for the Chebyshev approximation

MC_p:

default 16; number of random variates

MC_m:

default 30; number of products of random variates matrix and spatial weights matrix

spamPivot:

default “MMD”, alternative “RCM”

in_coef

default 0.1, coefficient value for initial Cholesky decomposition in “spam_update”

type

default “MC”, used with method “moments”; alternatives “mult” and “moments”, for use if trs is missing; see trW

correct

default TRUE, used with method “moments” to compute the Smirnov/Anselin correction term

trunc

default TRUE, used with method “moments” to truncate the Smirnov/Anselin correction term

SE_method

default “LU”, may be “MC”

nrho

default 200, as in SE toolbox; the size of the first stage lndet grid; it may be reduced, for example to 40

interpn

default 2000, as in SE toolbox; the size of the second stage lndet grid

small_asy

default TRUE; if the method is not “eigen”, use asymptotic covariances rather than numerical Hessian ones if n <= small

small

default 1500; threshold number of observations for asymptotic covariances when the method is not “eigen”

SElndet

default NULL, may be used to pass a pre-computed SE toolbox style matrix of coefficients and their lndet values to the "SE_classic" and "SE_whichMin" methods

LU_order

default FALSE; used in “LU_prepermutate”, note warnings given for lu method

pre_eig

default NULL; may be used to pass a pre-computed vector of eigenvalues

OrdVsign

default 1; used to set the sign of the final component to negative if -1 (the alpha * sigma^4 term in Ord (1975), equation B.1).
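Several of these arguments can also be passed through the control list of the Bayesian fitting functions; as a hedged sketch (grid sizes illustrative only, again assuming COL.OLD and lw from the Examples below), the first- and second-stage lndet grids of the default “SE_classic” log-determinant method may be resized:

# sketch only: shrink the first-stage lndet grid from 200 to 40 points
fit <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
 control=list(nrho=40, interpn=2000))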

Extra Bayesian control arguments

ldet_method

default “SE_classic”; equivalent to the method argument in lagsarlm

interval

default c(-1, 1); used unmodified or set internally by jacobianSetup

ndraw

default 2500L; integer total number of draws

nomit

default 500L; integer total number of omitted burn-in draws

thin

default 1L; integer thinning interval

verbose

default FALSE; inverse of quiet argument in lagsarlm

detval

default NULL; not yet in use, precomputed matrix of log determinants

prior

a list with the following components:

rhoMH, lambdaMH

default FALSE, use griddy Gibbs updates for the spatial coefficient; if TRUE, use Metropolis-Hastings updates

Tbeta

default NULL; prior variance-covariance matrix of the betas, set to diag(k)*1e+12 if NULL

c_beta

default NULL; prior means of the betas, set to 0 if NULL

rho

default 0.5; starting value of the spatial autoregressive coefficient

sige

default 1; starting value of the residual variance

nu

default 0; informative Gamma(nu,d0) prior on sige

d0

default 0; informative Gamma(nu,d0) prior on sige

a1

default 1.01; parameter for beta(a1,a2) prior on rho

a2

default 1.01; parameter for beta(a1,a2) prior on rho

cc

default 0.2; initial tuning parameter for M-H sampling

gG_sige

default TRUE; include sige in lambda griddy Gibbs update

cc1

default 0.2; initial tuning parameter for M-H sampling

cc2

default 0.2; initial tuning parameter for M-H sampling
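Taken together, a hedged sketch of a call that lengthens and thins the chain and switches the spatial coefficient update from griddy Gibbs to Metropolis-Hastings (all values illustrative rather than recommendations, again assuming COL.OLD and lw from the Examples below):

# sketch only: longer, thinned chain with an M-H update for rho
set.seed(1)
fit <- spBreg_lag(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
 control=list(ndraw=5000L, nomit=1000L, thin=5L,
  prior=list(rhoMH=TRUE, a1=1.01, a2=1.01, cc=0.2)))
summary(fit)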

References

LeSage J and RK Pace (2009) Introduction to Spatial Econometrics. CRC Press, Boca Raton.

Author

Roger Bivand Roger.Bivand@nhh.no, with thanks to Abhirup Mallik and Virgilio Gómez-Rubio for initial coding GSoC 2011

Examples

#require("spdep", quietly=TRUE)
data(oldcol, package="spdep")
lw <- spdep::nb2listw(COL.nb, style="W")
require("coda", quietly=TRUE)
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw)
print(summary(COL.err.Bayes))
#> 
#> Iterations = 501:2500
#> Thinning interval = 1 
#> Number of chains = 1 
#> Sample size per chain = 2000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>                 Mean       SD Naive SE Time-series SE
#> (Intercept)  59.6663  6.99423 0.156396       0.156396
#> INC          -0.9421  0.39894 0.008921       0.008921
#> HOVAL        -0.3011  0.09984 0.002233       0.002233
#> lambda        0.5664  0.15362 0.003435       0.003435
#> sige        115.0482 25.96435 0.580581       0.622638
#> 
#> 2. Quantiles for each variable:
#> 
#>                2.5%     25%      50%      75%    97.5%
#> (Intercept) 45.6439 55.3852  60.1164  64.5665  72.1349
#> INC         -1.7360 -1.2047  -0.9445  -0.6803  -0.1379
#> HOVAL       -0.4977 -0.3683  -0.3010  -0.2346  -0.1070
#> lambda       0.2335  0.4717   0.5773   0.6758   0.8309
#> sige        74.4245 96.7013 111.9152 129.2463 176.8133
#> 
print(raftery.diag(COL.err.Bayes, r=0.01))
#> 
#> Quantile (q) = 0.025
#> Accuracy (r) = +/- 0.01
#> Probability (s) = 0.95 
#>                                                    
#>              Burn-in  Total Lower bound  Dependence
#>              (M)      (N)   (Nmin)       factor (I)
#>  (Intercept) 3        1052  937          1.120     
#>  INC         2        969   937          1.030     
#>  HOVAL       3        1052  937          1.120     
#>  lambda      3        1010  937          1.080     
#>  sige        2        930   937          0.993     
#> 
if (FALSE) { # \dontrun{
ev <- eigenw(lw)
W <- as(lw, "CsparseMatrix")
trMatc <- trW(W, type="mult")
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
 control=list(prior=list(lambdaMH=TRUE)))
print(summary(COL.err.Bayes))
print(raftery.diag(COL.err.Bayes, r=0.01))
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
 Durbin=TRUE)
print(summary(COL.err.Bayes))
print(summary(impacts(COL.err.Bayes)))
print(raftery.diag(COL.err.Bayes, r=0.01))
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
 Durbin=TRUE, control=list(prior=list(lambdaMH=TRUE)))
print(summary(COL.err.Bayes))
print(summary(impacts(COL.err.Bayes)))
print(raftery.diag(COL.err.Bayes, r=0.01))
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
 Durbin=~INC)
print(summary(COL.err.Bayes))
print(summary(impacts(COL.err.Bayes)))
print(raftery.diag(COL.err.Bayes, r=0.01))
set.seed(1)
COL.err.Bayes <- spBreg_err(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
 Durbin=~INC, control=list(prior=list(lambdaMH=TRUE)))
print(summary(COL.err.Bayes))
print(summary(impacts(COL.err.Bayes)))
print(raftery.diag(COL.err.Bayes, r=0.01))
set.seed(1)
COL.sacW.B0 <- spBreg_sac(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
 Durbin=FALSE, control=list(ndraw=1500L, nomit=500L))
print(summary(COL.sacW.B0))
print(summary(impacts(COL.sacW.B0, tr=trMatc), zstats=TRUE, short=TRUE))
set.seed(1)
COL.sacW.B1 <- spBreg_sac(CRIME ~ INC + HOVAL, data=COL.OLD, listw=lw,
 Durbin=TRUE, control=list(ndraw=1500L, nomit=500L))
print(summary(COL.sacW.B1))
print(summary(impacts(COL.sacW.B1, tr=trMatc), zstats=TRUE, short=TRUE))
set.seed(1)
COL.lag.Bayes <- spBreg_lag(CRIME ~ INC + HOVAL, data=COL.OLD,
 listw=lw)
print(summary(COL.lag.Bayes))
print(summary(impacts(COL.lag.Bayes, tr=trMatc), short=TRUE, zstats=TRUE))
print(summary(impacts(COL.lag.Bayes, evalues=ev), short=TRUE, zstats=TRUE))
set.seed(1)
COL.D0.Bayes <- spBreg_lag(CRIME ~ INC + HOVAL, data=COL.OLD,
 listw=lw, Durbin=TRUE)
print(summary(COL.D0.Bayes))
print(summary(impacts(COL.D0.Bayes, tr=trMatc), short=TRUE, zstats=TRUE))
set.seed(1)
COL.D1.Bayes <- spBreg_lag(CRIME ~ DISCBD + INC + HOVAL, data=COL.OLD,
 listw=lw, Durbin= ~ INC)
print(summary(COL.D1.Bayes))
print(summary(impacts(COL.D1.Bayes, tr=trMatc), short=TRUE, zstats=TRUE))
#data(elect80, package="spData")
#lw <- spdep::nb2listw(e80_queen, zero.policy=TRUE)
#el_ml <- lagsarlm(log(pc_turnout) ~ log(pc_college) + log(pc_homeownership)
# + log(pc_income), data=elect80, listw=lw, zero.policy=TRUE, method="LU")
#print(summary(el_ml))
#set.seed(1)
#el_B <- spBreg_lag(log(pc_turnout) ~ log(pc_college) + log(pc_homeownership)
# + log(pc_income), data=elect80, listw=lw, zero.policy=TRUE)
#print(summary(el_B))
#print(el_ml$timings)
#print(attr(el_B, "timings"))
} # }