Geostatistics

Geostatistics is concerned with field variables, i.e. variables that

  • vary continuously over space, or space/time
  • can meaningfully be interpolated (predicted) at unobserved locations

That is, not with marked point patterns, such as

  • tree stem diameters
  • coal power plant emissions

although both share the property of being "xyz" data (point coordinates + attributes)

Overview

  • spatial correlation: (semi)variogram, auto-covariogram, auto-correlogram
  • spatial prediction:
    • when is spatial prediction meaningful? fields and point processes.
    • stationarity, ergodicity
    • prediction errors
    • change of support
  • exploratory data analysis: distance distributions, the h-scatterplot
  • computing and modelling variograms
  • kriging prediction
  • regression models with stationary residuals

  • multivariable geostatistics: co-kriging
  • modelling non-stationary processes: transformation, stratification
  • spatiotemporal geostatistics
  • machine learning approaches to spatial prediction

spatial prediction

Given \(n\) observations \(z(x_1),...,z(x_n)\) we assume a model in which these are a realisation of a random field \(Z(x)\), observed at the fixed locations \(x_1,...,x_n\): \(Z\) is random, and \(x\) is not.

Spatial prediction (Kriging) is concerned with predicting unobserved values \(Z(x_0)\), by using a linear predictor \[\hat{Z}(x_0) = \sum_{i=1}^n \lambda_i Z(x_i)\] where the weights \(\lambda_i\) are chosen such that \(E(\hat{Z}(x_0)) = E(Z(x_0))\) (unbiasedness) and \(Var(\hat{Z}(x_0)-Z(x_0))\) is minimal.
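
As an intermediate step not spelled out above: for a constant but unknown mean \(m\) (the stationarity assumption introduced below), the unbiasedness condition reduces to a constraint on the weights, \[E(\hat{Z}(x_0)) = \sum_{i=1}^n \lambda_i E(Z(x_i)) = m \sum_{i=1}^n \lambda_i = m \quad \Rightarrow \quad \sum_{i=1}^n \lambda_i = 1.\]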

This is meaningful for point patterns only if there is a missing value, i.e. if at \(x_0\) there is a tree for which we do not know the diameter. For field variables, such as air temperature, values are missing pretty much everywhere, and thus we can predict \(Z(x)\) over a continuous surface.

Stationarity, ergodicity

For finding the Kriging weights, we need to know all (co)variances of and between the \(Z(x_i)\) and \(Z(x_0)\). This can only be done by making stationarity assumptions. E.g. second order stationarity involves

  • stationarity of the mean: \(E(Z(x)) = m\)
  • stationarity of the variance: \(E((Z(x)-m)^2) = C(0)\)
  • stationarity of the covariance: \(E((Z(x)-m)(Z(x+h)-m)) = C(h)\)

or the slightly weaker intrinsic stationarity, which works on increments:

  • mean: \(E(Z(x)-Z(x+h)) = 0\)
  • variance: \(E((Z(x)-Z(x+h))^2) = 2 \gamma(h)\)

\(C(h)\) is called the covariogram, \(\gamma(h)\) is called the semivariogram, or variogram.

For second order stationary fields, i.e. \(C(0) < \infty\), we have \[C(0) - C(h) = \gamma(h)\] and the correlogram is defined as \(C(h)/C(0)\).
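
Combining the unbiasedness constraint \(\sum_i \lambda_i = 1\) with minimizing the prediction variance gives, as a standard result (stated here without derivation), the ordinary kriging system in terms of the semivariogram, \[\sum_{j=1}^n \lambda_j \gamma(x_i - x_j) + \mu = \gamma(x_i - x_0), \quad i = 1,\dots,n, \qquad \sum_{j=1}^n \lambda_j = 1,\] with Lagrange multiplier \(\mu\), and the minimized prediction variance \[\sigma^2(x_0) = \sum_{i=1}^n \lambda_i \gamma(x_i - x_0) + \mu.\]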

Ergodicity involves the estimation of \(\gamma(h)\), or \(C(h)\), from a single sample of the random field \(Z\). It (very) roughly means that the data must contain the information sought. This is not the case when

  • Using temperature data from Munich to estimate \(\gamma(h)\) for distances larger than a few kilometers, as would be needed for interpolating temperatures over Germany
  • there is no repetition or regularity, but e.g. a composition of different processes with very different properties in the data set

Estimating spatial correlation

Spatial correlation is estimated from sample data, by

  • forming (all \(n(n-1)/2\)) pairs \(Z(x_i), Z(x_j)\)
  • computing distances \(h=|x_i-x_j|\)
  • grouping the pairs by distance class
  • for each distance class, compute (half) the average of \((Z(x_i)-Z(x_j))^2\)
  • this gives estimates of \(\gamma(h)\) for a set of distance values
  • to this sample variogram, a parametric model is fitted (a by-hand sketch of these steps follows after this list)

  • alternative: use maximum likelihood
    • assumes multivariate Gaussian distribution
    • involves iteratively solving systems of size \(n\)
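
A minimal by-hand sketch of the steps above, assuming the sf point data set s1 with attribute PM10 prepared in the next section (the 20 km distance classes are an arbitrary choice); compare the result with the np, dist and gamma columns returned by variogram() below:

library(sf)
z = s1$PM10
d = st_distance(s1)                          # n x n matrix of pairwise distances (m)
sq = outer(z, z, function(a, b) (a - b)^2)   # squared differences for all pairs
keep = lower.tri(d)                          # use each pair only once
h = as.numeric(d[keep])
cl = cut(h, breaks = seq(0, 300000, 20000))  # 20 km distance classes up to 300 km
gamma_hat = tapply(sq[keep], cl, mean) / 2   # Matheron's semivariogram estimator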

exploratory data analysis: distance distributions, the h-scatterplot

We will explore an air quality dataset collected from the EEA's AirBase ("air quality data base"), as prepared in the gstat package:

data(DE_RB_2005, package = "gstat")
library(spacetime)
library(stars)
## Loading required package: abind
## Loading required package: sf
## Linking to GEOS 3.7.0, GDAL 2.4.0, PROJ 5.2.0

s = st_as_stars(as(DE_RB_2005, "STFDF"))
# select a single day, omit NA values:
s1 = na.omit(st_as_sf(s[,,100]))
names(s1)[1] = "PM10"

library(gstat)
variogram(PM10~1, s1)
##     np      dist     gamma dir.hor dir.ver   id
## 1    4  18898.55  7.020488       0       0 var1
## 2   23  34364.92  6.440393       0       0 var1
## 3   32  56345.09 11.255562       0       0 var1
## 4   39  77732.46  7.906209       0       0 var1
## 5   55  99506.37  8.341365       0       0 var1
## 6   71 122440.26  9.902675       0       0 var1
## 7   85 143636.56 15.158993       0       0 var1
## 8   80 164572.07 13.839935       0       0 var1
## 9   91 187694.48 12.821504       0       0 var1
## 10  88 209843.86 14.708895       0       0 var1
## 11  98 231000.08 17.078920       0       0 var1
## 12 110 254882.02 22.609394       0       0 var1
## 13 104 276259.20 22.452120       0       0 var1
## 14 101 297576.88 18.176579       0       0 var1
## 15  91 320326.70 28.031442       0       0 var1

suppressPackageStartupMessages(library(xts))
# temporal autocorrelation of the PM10 series at the first station:
acf(na.omit(as.xts(s[,1])))

plot(variogram(PM10~1, s1))

Exercises (3): variograms

Building on the above dataset, explore the sample variogram by:

  • setting the cutoff to a much larger, and a much smaller value
  • setting the width argument to a larger and smaller value
  • making a plot in which the number of point pairs involved is also shown
  • setting cloud=TRUE - what do the points in this plot represent? (hint: plot the number of point pairs)
  • Save a variogram you like to an object, and fit a model to it using fit.variogram; use vgm() to list the possible models and show.vgms() to get an idea how they look.
  • If you get errors on fit, don't worry:
    • plot the variogram with the fitted model (like: plot(v, fit)), and examine how it looks
    • try to choose better initial values for nugget (offset), sill (final level), range (distance where correlation no longer increases)

kriging prediction

Settle on some variogram; we choose:

v = variogram(PM10~1, s1)
f = fit.variogram(v, vgm(20, "Lin", 0, 4))
plot(v, f)

Create a grid, using stars:

# load German boundaries
data(air, package = "spacetime")
de <- st_transform(st_as_sf(DE_NUTS1), 32632)
bb = st_bbox(de)
dx = seq(bb[1], bb[3], 10000)
dy = seq(bb[4], bb[2], -10000) # decreases!
library(magrittr) # provides the %>% pipe, if not already attached
st_as_stars(matrix(0., length(dx), length(dy))) %>%
  st_set_dimensions(1, dx) %>%
  st_set_dimensions(2, dy) %>%
  st_set_crs(32632) -> grd

Inverse distance interpolation

i = idw(PM10~1, s1, grd)
## [inverse distance weighted interpolation]

plot(i, reset = FALSE, key.pos = 4) # allow addition
plot(st_geometry(de), add = TRUE, border = 'red')
plot(s1, add = TRUE, col = 'green', pch = 3)

Kriging

kr = krige(PM10~1, s1, grd, f)
## [using ordinary kriging]

plot(kr, reset = FALSE, key.pos = 4) # allow addition
plot(st_geometry(de), add = TRUE, border = 'red')
plot(s1, add = TRUE, col = 'green', pch = 3)

Comparing maps (!)

i[[2]] = NULL # remove this attribute
i$kriging = kr[[1]] # add attribute named "kriging" to i
plot(merge(i))

Exercises (4)

  • Compare the ranges of the kriging and idw map, and the range of the PM10 data. Explain differences and correspondences.
  • Examine the kriging variance map.
  • Compute a standard error map by taking the square root of the kriging variance
  • Plot this map with the data points overlayed: can you explain the pattern in the errors?
  • Repeat the exercise with the same linear model but with a zero nugget effect, and compare the ranges again.
  • Repeat the exercise with a spherical model with a short range and zero nugget, and compare the ranges again.
  • Repeat the exercise with a pure nugget effect model (e.g., vgm(20, "Nug", 0)), and compare the ranges again.

  • Compute (approximate) 95% prediction intervals by adding +/- 2 standard errors to the kriging predictions
  • Create a map with locations where the threshold of 25 has been exceeded, or might have been exceeded (is inside the prediction interval)

prediction errors

Given the kriging predictor, we also obtain the kriging variance \[\sigma^2(x_0) = E((\hat{Z}(x_0)-Z(x_0))^2)\] (mapped in a short sketch below), which

  • is an average measure of error, given that all assumptions are met
  • only depends on (co)variances, and not on data values (i.e., is only configuration dependent)
  • will be Normally distributed if \(Z(x)\) is multivariate Gaussian
  • (ordinary, universal) kriging involves the assumption that the mean process (\(m\)) is unknown, but \(\gamma(h)\) is known
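
A minimal sketch of mapping this, assuming the kr object created in the Kriging step above (gstat returns the ordinary kriging variance as the attribute var1.var):

kr$se = sqrt(kr$var1.var)                    # kriging standard error
plot(kr["se"], reset = FALSE, key.pos = 4)   # allow addition
plot(s1, add = TRUE, col = 'green', pch = 3) # errors are smallest near observations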

Change of support

Change of support involves estimating block averages of the form \[\frac{1}{|A|}\int_A Z(u)du\]

or other aggregations of the form \[\frac{1}{|A|}\int_A g(Z(u))du\] which could, for instance, estimate the fraction of an area for which a certain threshold is exceeded, if \(g(\cdot)\) is an indicator function.

Change of support (2)

The integrals are usually approximated by a weighted or unweighted sum; the linear average is obtained by block kriging (a sketch follows the list below), the non-linear by

  1. stochastic simulation of \(Z\) at the point support,
  2. then applying \(g()\),
  3. then spatially aggregating
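
For the linear case, a minimal gstat sketch using the data, grid and variogram model from above (explored further in the exercises below):

bk = krige(PM10~1, s1, grd, f, block = c(10000, 10000)) # 10 km x 10 km block means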

Exercises (5)

  • Carry out block kriging, setting block=c(10000,10000) to estimate 10 km x 10 km block mean values
  • Compare the kriged values to those obtained by point kriging, compare the ranges
  • Compare the standard errors between point and block kriging

regression models with stationary residuals

  • Universal kriging (syn. regression kriging, or external drift kriging) extends the constant-mean model to one where \(E(Z(x)) = F(x)\beta\), i.e. a multiple linear regression model with known, spatially varying covariates \(F(x)\)

  • The spatial prediction now becomes a combination of the estimated trend and the predicted residual, leaning on the trend surface where data are remote or weakly correlated; a minimal gstat sketch follows below
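
A minimal gstat sketch, using a hypothetical covariate elev that would have to be present both in the observations s1 and on the prediction grid grd; the variogram starting values are arbitrary:

v_r = variogram(PM10~elev, s1)                      # variogram of the regression residuals
f_r = fit.variogram(v_r, vgm(15, "Exp", 100000, 5)) # fit a residual variogram model
uk = krige(PM10~elev, s1, grd, f_r)                 # universal / external drift kriging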

multivariable geostatistics: co-kriging

  • Co-kriging considers multiple variables at once (a minimal sketch follows this list)
  • It requires, besides direct variograms, the cross variograms describing cross correlations for pairs of variables
  • It mostly helps (improves prediction) when a secondary variable has a higher sampling density
  • It is needed when we need the estimation error of some composite index, e.g. the sum or difference of a set of variables
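
A minimal sketch of gstat's multivariable interface, assuming a hypothetical, more densely sampled secondary variable NO2 in an sf data set s2; the starting model for the linear model of coregionalization is arbitrary:

g = gstat(NULL, id = "PM10", formula = PM10~1, data = s1)
g = gstat(g, id = "NO2", formula = NO2~1, data = s2)
v = variogram(g)                             # direct and cross variograms
g = fit.lmc(v, g, vgm(1, "Sph", 200000, 1))  # fit a linear model of coregionalization
ck = predict(g, grd)                         # co-kriging predictions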

modelling non-stationary processes: transformation, stratification

  • Common transformations: Box-Cox, logarithmic (a minimal sketch follows this list)
  • Broader group of models: generalized linear mixed models (exponential family); calls for MCMC or INLA
  • Stratification: cut up the area into pieces, model and interpolate each zone separately; this gives abrupt jumps at zone boundaries
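
A minimal sketch of the logarithmic route in gstat, assuming the PM10 data are roughly log-normal (note that simply exponentiating the predictions is a biased back-transform for the mean); variogram starting values are arbitrary:

v_log = variogram(log(PM10)~1, s1)
f_log = fit.variogram(v_log, vgm(0.2, "Exp", 100000, 0.05))
kr_log = krige(log(PM10)~1, s1, grd, f_log)
kr_log$naive = exp(kr_log$var1.pred)   # naive (biased) back-transform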

spatiotemporal geostatistics

  • Can be seen as an extension of co-kriging, but a continuous covariance model over space and time is needed
  • Modelling cumbersome, but doable, see https://journal.r-project.org/archive/2016/RJ-2016-014/index.html for a reproducible example
  • Required when a continuous spatiotemporal surface is needed; kriging per time slice gives sudden jumps, in particular if variograms are re-estimated and changes are not smoothed over time.

machine learning approaches to spatial prediction

  • Difficulty: machine learning methods typically assume independent observations.
  • Needed: strategies for calibration/validation; do not split randomly, but leave out spatial or temporal blocks (a minimal sketch follows this list)
  • If two observations are close in space, they will have similar features; does the feature similarity yield better predictions, or the spatial correlation?
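
A minimal sketch of assigning block-wise folds, here with hypothetical west-east strips based on the stations' x-coordinate (any spatial and/or temporal blocking scheme could be substituted):

xy = st_coordinates(s1)
s1$fold = cut(xy[, 1], breaks = 5, labels = FALSE)  # 5 west-east strips as folds
table(s1$fold)
# leave one strip out at a time, so that test locations are not
# surrounded by nearby (spatially correlated) training locations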

Stochastic simulation

f = vgm(20, "Sph", 200000, 5)
sim = krige(PM10~1, s1, grd, f, nsim = 100, nmax = 30)
## drawing 100 GLS realisations of beta...
## [using conditional Gaussian simulation]

plot(sim)

Compare the mean of the simulations with the kriged values

kr = krige(PM10~1, s1, grd, f)
## [using ordinary kriging]

kr$mean = st_apply(sim, 1:2, mean)[[1]]
dim(kr[[3]]) = dim(kr[[1]]) # Ooops! work-around: match the dimensions of the new attribute to the kriging grid
plot(merge(kr[c(1,3)]))

Variogram of simulations

plot(variogram(V1~1, st_as_sf(sim[,,,1])))

Variogram of kriging values

plot(variogram(var1.pred~1, st_as_sf(kr[1])))

Spatial aggregation

Aggregate, for each state (NUTS1 region), the fraction of grid cells where PM10 exceeds 25:

a = aggregate(sim, de, function(x) mean(x > 25))
plot(a, max.plot = 10, key.pos = 4)

Summarizing the simulations

Compute 95% confidence limits:

a.ci = st_apply(a, 1, quantile, c(0.025, 0.975))
a.ci
## stars object with 2 dimensions and 1 attribute
## attribute(s):
##      var1         
##  Min.   :0.00000  
##  1st Qu.:0.00000  
##  Median :0.00000  
##  Mean   :0.01008  
##  3rd Qu.:0.00000  
##  Max.   :0.16074  
## dimension(s):
##          from to offset delta                       refsys point
## quantile    1  2     NA    NA                           NA    NA
## geometry    1 16     NA    NA +proj=utm +zone=32 +datum... FALSE
##                                                                     values
## quantile                                                      2.5% , 97.5%
## geometry MULTIPOLYGON (((546832.3 55...,...,MULTIPOLYGON (((622596.9 57...

C.I. for the fraction of a state's area where (point support) PM10 values exceed 25 (a non-linear spatial aggregation):

plot(aperm(a.ci, 2:1))