# Deterministic Interpolation Methods

Deterministic interpolation methods generate surfaces from measured points based on either the degree of similarity or the degree of smoothing.

A. Nearest Neighbour (NN) and triangulation

The nearest neighbour method, one of the deterministic interpolation methods, assigns the value of the closest observation to a given grid cell. The NN method is also known as the Thiessen or Voronoi method. Triangular Irregular Networks (TINs) are formed from the observation points using triangulation.
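
As a minimal sketch of nearest-neighbour assignment (the observation locations and values below are hypothetical):

```python
import math

def nearest_neighbour(points, values, query):
    """Assign the value of the closest observation to the query location."""
    best = min(range(len(points)), key=lambda i: math.dist(points[i], query))
    return values[best]

# Hypothetical observations: (x, y) locations and their measured values
obs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [10.0, 20.0, 30.0]
print(nearest_neighbour(obs, vals, (0.9, 0.1)))  # closest to (1, 0): 20.0
```

Every grid cell inside one Thiessen/Voronoi polygon receives the same value, which is why the resulting fields look blocky.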

The TIN method provides the surface slope from three neighbouring points; this slope can be used to calculate the value at a given grid cell.
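
The plane through three TIN vertices can be evaluated with barycentric weights; a sketch with hypothetical vertex data:

```python
def tin_value(tri, z, q):
    """Value at q on the plane through three TIN vertices with heights z."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    qx, qy = q
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (qx - x3) + (x3 - x2) * (qy - y3)) / det
    w2 = ((y3 - y1) * (qx - x3) + (x1 - x3) * (qy - y3)) / det
    w3 = 1.0 - w1 - w2  # barycentric weights sum to one
    return w1 * z[0] + w2 * z[1] + w3 * z[2]

# Hypothetical triangle; its vertex heights lie on the plane z = x + 2y
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(tin_value(tri, (0.0, 1.0, 2.0), (0.5, 0.5)))  # 1.5
```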

Both methods are quick and easy, but the interpolated fields do not always look realistic. A disadvantage of both methods is the lack of success metrics. They work best when there are many data points. Ancillary information cannot be included.
Both methods have limited application in meteorology, but they can be used successfully with dense measurement networks.

B. Inverse Distance Weighting (IDW)

IDW is a more advanced nearest neighbour approach that allows more observations than just the closest point to be included. The value at a given grid cell is obtained as a linear combination of the surrounding locations. IDW is the most widely used deterministic interpolation method.

The weight of each observation is determined by a distance function, which is non-linear in the distance. IDW is an exact interpolator.

The method is quick, simple to implement, and can be “tailored” to meet specific needs.

Anisotropy in the source data is allowed by the method.

Ancillary information cannot be included.

Cross-validation is used to assess success. The method tends to produce "bull's eye" patterns around the data points.

There is no extrapolation: all interpolated values lie within the range of the data points.
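
A minimal IDW sketch (observation data are hypothetical; the commonly used inverse-square weight is an assumption here, and the power can be tuned):

```python
import math

def idw(points, values, query, power=2.0):
    """Inverse distance weighting: weights fall off as 1 / distance**power."""
    weights = []
    for i, p in enumerate(points):
        d = math.dist(p, query)
        if d == 0.0:
            return values[i]  # exact interpolator: reproduce the observation
        weights.append(1.0 / d ** power)
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

obs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [10.0, 20.0, 30.0]
v = idw(obs, vals, (0.5, 0.5))
print(min(vals) <= v <= max(vals))  # True: no extrapolation
```

Because the weights are a convex combination, the result can never leave the range of the observed values, which illustrates the no-extrapolation property stated above.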

In meteorology, IDW is widely used.

C. Polynomial functions (splines)

Polynomial functions are methods for fitting trend functions to the observations using polynomials of a chosen order. Spline functions are the most common. In general, they are global interpolators that meet the criteria of exact interpolators by fitting many polynomials in overlapping neighbourhood regions.

Algorithms are used to smooth the resulting surfaces so that no strongly oscillating patterns arise between the observation points. These smoothing algorithms are the main difference between the various spline methods available.
Because errors are not suppressed in the algorithm, the measurements are assumed to be error-free. When advanced methods are used, ancillary data can be included.

Polynomial functions are thought to be a good method for interpolating monthly and yearly climate elements, but they are less suitable for higher temporal resolutions such as days and hours.
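
As an illustration of polynomial fitting (a single global polynomial rather than a full spline implementation; the transect data are hypothetical), a polynomial of full degree reproduces the observations exactly:

```python
import numpy as np

# Hypothetical monthly mean values at five positions along a transect
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([5.0, 6.5, 7.0, 6.0, 4.5])

# Degree len(x) - 1 yields an exact (interpolating) polynomial
coeffs = np.polyfit(x, y, deg=len(x) - 1)
fitted = np.polyval(coeffs, x)
print(np.allclose(fitted, y))  # True: the curve passes through every point
```

Spline methods avoid the oscillation risk of such high-degree global fits by stitching together low-order polynomials over overlapping neighbourhoods.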

D. Linear regression

Linear regression expresses the relationship between one or more explanatory variables and a predicted variable. A straight line is fitted through the data points in its most basic form.
Linear regression models are frequently used as global interpolators.
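
In its most basic form, fitting a straight line amounts to ordinary least squares; a pure-Python sketch with hypothetical elevation and temperature data:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical: temperature (y, degrees C) falling with elevation (x, km)
a, b = fit_line([0.0, 0.5, 1.0, 1.5], [15.0, 12.0, 8.5, 5.5])
print(round(a, 2), round(b, 2))  # 15.05 -6.4
```

Here elevation acts as the explanatory variable, so the fitted line can be applied at every grid cell where elevation is known.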

Linear regression models are deterministic, but when some statistical assumptions about the probability distribution of the predicted variable are taken into account, the method becomes stochastic.

The assumption for deterministic linear regression models is that the regression model can be interpreted physically; for stochastic linear regression models, a normal distribution and spatial independence are also assumed. From a theoretical standpoint, no extrapolations are permitted.

Multiple regression can be used to incorporate ancillary data. Cross-validation is used to assess the success of deterministic linear regression models. In stochastic linear regression models, success can be measured using the explained variance and the regression standard error.
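
The stochastic success measures mentioned above can be sketched as follows (hypothetical values; the residual degrees of freedom n - 2 assume a straight-line model):

```python
def regression_scores(ys, fitted):
    """Explained variance (R squared) and regression standard error."""
    n = len(ys)
    mean = sum(ys) / n
    ss_tot = sum((y - mean) ** 2 for y in ys)
    ss_res = sum((y - f) ** 2 for y, f in zip(ys, fitted))
    r2 = 1.0 - ss_res / ss_tot
    se = (ss_res / (n - 2)) ** 0.5  # standard error with n - 2 d.o.f.
    return r2, se

# A perfect fit explains all variance and has zero standard error
r2, se = regression_scores([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
print(r2, se)  # 1.0 0.0
```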

Linear regression must be carried out using a standard statistical program; the maps can then be computed using map-calculator functions in a GIS environment.

E. Artificial Neural Networks (ANN)

Neural networks are statistical data modeling tools that are non-linear. They can be used to model complex relationships between inputs and outputs, as well as to identify patterns in data.
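
A minimal sketch of the non-linear input-output mapping, using one hidden tanh layer (all weights here are hypothetical; in practice they are learned from data):

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden tanh layer followed by a linear output layer."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

# Two inputs, two hidden units, one output (hypothetical weights)
y = mlp_forward([0.5, -1.0],
                W1=[[1.0, 0.5], [-0.5, 1.0]], b1=[0.0, 0.1],
                W2=[[2.0, -1.0]], b2=[0.5])
print(len(y))  # 1
```

The tanh activation is what makes the mapping non-linear; a network without it collapses to a linear regression.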

ANNs require a significant amount of computational power.