# Interpolation techniques

Deterministic interpolation techniques generate surfaces from measured points based on the degree of similarity (inverse distance weighted) or smoothing (radial basis functions). The statistical properties of the measured points are used in geostatistical interpolation techniques (kriging).

Geostatistical interpolation techniques (kriging) quantify the spatial autocorrelation among measured points and take into account the spatial configuration of the sample points surrounding the prediction location.

Geostatistics is a set of methods for estimating values in areas where no samples have been taken, as well as assessing the uncertainty of these estimates.

These functions are critical in many decision-making processes because, in practice, it is impossible to collect samples at every location in an area of interest.

These methods are a means of constructing models of reality, that is, of the phenomenon you are interested in.

There are numerous interpolation methods. Some are quite adaptable and can take into account various aspects of the sample data.

The following interpolation methods are available from Geostatistical Analyst.

1. Areal interpolation

Areal interpolation is defined in most GIS literature as the reaggregation of data from one set of polygons (the source polygons) to another set of polygons (the target polygons). Demographers, for example, must frequently downscale or upscale the administrative units of their data.

If the population was counted at the county level, a demographer may need to downscale the data to predict the population of census blocks. In the case of large-scale redistricting, population predictions for a completely new set of polygons may be required.

This technique extends kriging theory to data that have been averaged or aggregated over polygons. Predictions and standard errors can be calculated for all points within and between the input polygons, and the predictions (along with their standard errors) can then be reaggregated to a new set of polygons.
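The kriging-based reaggregation described above is involved, but the underlying source-to-target idea can be illustrated with the simpler area-weighted alternative. The sketch below is not the Geostatistical Analyst method; it assumes a count is distributed uniformly within each source polygon, and the polygon names and overlap areas are hypothetical.

```python
def area_weighted_reaggregate(source_values, intersection_areas):
    """Reaggregate counts from source polygons to target polygons by
    apportioning each source count in proportion to overlap area.

    source_values:       {source_id: count}
    intersection_areas:  {(source_id, target_id): overlap area}

    Assumes each count is uniformly distributed within its source polygon
    (a simplification; kriging-based areal interpolation relaxes this).
    """
    # Total area of each source polygon covered by the target polygons
    source_totals = {}
    for (s, _), a in intersection_areas.items():
        source_totals[s] = source_totals.get(s, 0.0) + a

    # Apportion each source count to targets by share of overlap area
    targets = {}
    for (s, t), a in intersection_areas.items():
        share = a / source_totals[s]
        targets[t] = targets.get(t, 0.0) + source_values[s] * share
    return targets

# Hypothetical example: one county (population 1000) split 30/70
# between two census blocks
county_pop = {"county_A": 1000}
overlaps = {("county_A", "block_1"): 3.0, ("county_A", "block_2"): 7.0}
print(area_weighted_reaggregate(county_pop, overlaps))
# → {'block_1': 300.0, 'block_2': 700.0}
```

Unlike the kriging extension, this sketch produces no standard errors; it only shows how mass moves from source to target polygons.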

Three types of data can be used in areal interpolation:

1. Gaussian (average) data

2. Binomial (rate) data

3. Event (overdispersed Poisson) data

What are its limitations?

1. Non-stationarity

2. Polygons with a wide range of sizes

2. Inverse distance weighted interpolation (IDW)

Inverse distance weighted (IDW) interpolation explicitly assumes that things close to one another are more similar than things farther apart. IDW uses the measured values surrounding the prediction location to predict a value for any unmeasured location.

The measured values closest to the prediction location have a greater influence on the predicted value than those farther away.

According to IDW, each measured point has a local influence that decreases with distance. It gives more weight to points closest to the prediction location, and the weights decrease as the distance increases, hence the name inverse distance weighted.
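The weighting scheme described above can be sketched in a few lines: each sample receives a weight of 1/d^p (d the distance to the prediction location, p a power parameter), and the prediction is the weighted average of the sample values. The function name, the default power of 2, and the example coordinates below are illustrative choices, not part of any particular software's API.

```python
import numpy as np

def idw_predict(xy_known, values, xy_query, power=2, eps=1e-12):
    """Predict values at query points with inverse distance weighting.

    Weights are w_i = 1 / d_i**power; if a query point coincides with a
    sample (d < eps), that sample's value is returned directly.
    """
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    preds = []
    for q in np.asarray(xy_query, dtype=float):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                    # query coincides with a sample
            preds.append(values[d.argmin()])
            continue
        w = 1.0 / d**power                   # closer points get larger weights
        preds.append(np.sum(w * values) / np.sum(w))
    return np.array(preds)

# Four samples at the corners of a unit square
pts  = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [10.0, 20.0, 10.0, 20.0]
print(idw_predict(pts, vals, [(0.5, 0.5)]))  # equidistant → mean, [15.]
```

Raising `power` makes the interpolated surface honor nearby samples more strongly; lowering it produces a smoother, more averaged surface. Note that IDW predictions are always bounded by the minimum and maximum of the sample values.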