The curse of dimensionality refers to a set of problems that arise when working with high-dimensional data. The dimension of a dataset corresponds to the number of attributes or features it contains, and a dataset with a large number of attributes, generally on the order of a hundred or more, is referred to as high-dimensional. Some of the difficulties that come with high-dimensional data manifest while analyzing or visualizing the data to identify patterns, and some manifest while training machine learning models. The difficulties related to training machine learning models on high-dimensional data are collectively referred to as the ‘**Curse of Dimensionality**’. Two well-known aspects of the curse of dimensionality, ‘data sparsity’ and ‘distance concentration’, are discussed in the following sections.

*Contributed by: Arun K*

**Data Sparsity**

Supervised machine learning models are trained to accurately predict the outcome for a given input data sample. While training a model, the available data is split so that one part is used to train the model and another part is used to evaluate how the model performs on unseen data. This evaluation step helps us establish whether the model generalizes. Model generalization refers to a model’s ability to accurately predict the outcome for unseen input data; it is important to note that the unseen input data has to come from the same distribution as the data used to train the model. A generalized model’s prediction accuracy on unseen data should be very close to its accuracy on the training data. An effective way to build a generalized model is to capture the different possible combinations of the values of the predictor variables together with the corresponding targets.

For instance, if we are trying to predict a target that depends on two attributes, gender and age group, we should ideally capture the targets for all possible combinations of values of the two attributes, as shown in figure 1. If this data is used to train a model that is capable of learning the mapping between the attribute values and the target, the model could generalize. As long as the future unseen data comes from this distribution (a combination of values), the model would predict the target accurately.

In the above example, we assumed that the target value depends on gender and age group only. If the target also depends on a third attribute, say body type, the number of training samples required to cover all the combinations increases dramatically. The combinations are shown in figure 2: for two variables, we needed eight training samples; for three variables, we need 24.
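
The growth described above can be sketched in a few lines of Python. The specific attribute values are hypothetical, chosen only to reproduce the counts in the example (2 genders × 4 age groups = 8, times 3 body types = 24):

```python
from itertools import product

# Hypothetical attribute values chosen to match the article's counts.
gender = ["male", "female"]
age_group = ["0-12", "13-25", "26-60", "60+"]
body_type = ["slim", "average", "heavy"]

# Two attributes: every (gender, age group) pair needs a training sample.
two_attr = list(product(gender, age_group))
print(len(two_attr))    # 8 combinations

# Adding a third attribute multiplies the number of combinations.
three_attr = list(product(gender, age_group, body_type))
print(len(three_attr))  # 24 combinations
```

In general, the number of combinations is the product of the number of values of each attribute, so it grows exponentially with the number of attributes.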

The above examples show that, as the number of attributes or dimensions increases, the number of training samples required to generalize a model also increases dramatically.

In reality, the available training samples may not contain observed targets for all combinations of the attributes, because some combinations occur more often than others. As a result, the training samples available for building the model may not capture all possible combinations. This aspect, where the training samples do not cover all combinations, is referred to as ‘**data sparsity**’ or simply ‘**sparsity**’ in high-dimensional data. Data sparsity is one facet of the curse of dimensionality. Training a model on sparse data can lead to a high-variance, or overfitted, model: during training, the model learns from the frequently occurring combinations of the attributes and can predict their outcomes accurately, but when less frequently occurring combinations are fed to the model at prediction time, it may not predict the outcome accurately.

**Distance Concentration**

Another facet of the curse of dimensionality is ‘**Distance Concentration**’. Distance concentration refers to the problem of all the pairwise distances between different samples/points in the space converging to the same value as the dimensionality of the data increases. Several machine learning methods, such as clustering or nearest-neighbour methods, use distance-based metrics to identify the similarity or proximity of samples. Due to distance concentration, the concept of proximity or similarity of samples may no longer be qualitatively meaningful in higher dimensions. Figure 3 shows this aspect graphically [1]: a fixed number of random points are generated from a uniform distribution on a d-dimensional torus, where d corresponds to the number of dimensions considered at a time.


A density plot of the pairwise distances between the points is created for different dimensions. For a one-dimensional torus, we see that the density is approximately uniform. As the number of dimensions increases, the spread of the distribution decreases, indicating that the distances between different samples or points tend towards a single value. Figure 4 shows the decrease in the standard deviation of the distribution as the number of dimensions increases.
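
This effect is easy to reproduce numerically. The sketch below, which for simplicity draws points uniformly from the unit hypercube rather than a torus, measures the relative spread (standard deviation divided by mean) of the pairwise distances as the dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_spread(d, n=200):
    """Relative spread (std/mean) of the pairwise Euclidean distances
    between n points drawn uniformly from the d-dimensional unit cube."""
    pts = rng.uniform(size=(n, d))
    sq = (pts ** 2).sum(axis=1)
    # Squared pairwise distances via the identity |a-b|^2 = |a|^2 + |b|^2 - 2ab.
    d2 = sq[:, None] + sq[None, :] - 2.0 * pts @ pts.T
    d2 = np.clip(d2, 0.0, None)            # guard against rounding below zero
    dist = np.sqrt(d2[np.triu_indices(n, k=1)])  # each pair counted once
    return dist.std() / dist.mean()

for d in (1, 10, 100, 1000):
    print(d, round(distance_spread(d), 3))
```

The printed spread shrinks steadily as d grows, mirroring the narrowing density plots of figure 3.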

Aggarwal et al. [2] presented another interesting aspect of distance concentration: for L<sub>k</sub> norm-based distance metrics, their usefulness in higher dimensions depends on the value of k. The L<sub>1</sub> norm (Manhattan distance) is preferable to the L<sub>2</sub> norm (Euclidean distance) for high-dimensional data processing. This indicates that a choice of distance metric that works in lower dimensions, in algorithms such as k-nearest neighbours or k-means clustering, may not work in higher dimensions.
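
One way to see this is through the relative contrast, (D<sub>max</sub> − D<sub>min</sub>)/D<sub>min</sub>, of the distances from a query point to a sample of data points. The small simulation below (uniform data in the unit cube with the query at the origin; an illustrative setup, not taken from the paper) shows the L1 norm retaining noticeably more contrast than L2 in high dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

def relative_contrast(d, k, n=500):
    """(Dmax - Dmin) / Dmin of the L_k distances from the origin to
    n points drawn uniformly from the d-dimensional unit cube."""
    pts = rng.uniform(size=(n, d))
    dist = (pts ** k).sum(axis=1) ** (1.0 / k)  # coordinates are non-negative
    return (dist.max() - dist.min()) / dist.min()

d = 100
print("L1 contrast:", round(relative_contrast(d, k=1), 3))
print("L2 contrast:", round(relative_contrast(d, k=2), 3))
```

The larger L1 contrast means nearest and farthest neighbours remain more distinguishable under the Manhattan distance.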

**Mitigating Curse of Dimensionality**

To mitigate the problems associated with high-dimensional data, a suite of techniques generally referred to as ‘**Dimensionality reduction techniques**’ is used. Dimensionality reduction techniques fall into one of two categories: ‘feature selection’ or ‘feature extraction’.

**Feature selection Techniques**

In feature selection techniques, each attribute is tested for its usefulness and then selected or eliminated accordingly. Some of the commonly used feature selection techniques are discussed below.

**Low Variance filter**: In this technique, the variances of the distributions of all the attributes in a dataset are compared, and attributes with very low variance are eliminated. Attributes with very little variance take an almost constant value and do not contribute to the predictability of the model.
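
A minimal numpy sketch of the filter, on synthetic data with one near-constant column and an assumed variance threshold of 0.01:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: three informative columns plus one near-constant column.
X = rng.normal(size=(100, 4))
X[:, 3] = 5.0 + rng.normal(scale=1e-4, size=100)  # almost constant

threshold = 0.01                 # assumed cut-off; tune per dataset
variances = X.var(axis=0)
keep = variances > threshold
X_reduced = X[:, keep]
print(X_reduced.shape)           # the near-constant column is dropped
```

scikit-learn offers the same behaviour through `sklearn.feature_selection.VarianceThreshold`.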

**High Correlation filter**: In this technique, the pairwise correlations between attributes are determined. For each pair that shows a very high correlation, one of the attributes is eliminated and the other retained. The variability in the eliminated attribute is captured through the retained attribute.
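
A sketch on synthetic data, using an assumed correlation cut-off of 0.9 and keeping the first attribute of each highly correlated pair:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 200
a = rng.normal(size=n)
b = 2.0 * a + rng.normal(scale=0.05, size=n)  # nearly a duplicate of a
c = rng.normal(size=n)                        # independent attribute
X = np.column_stack([a, b, c])

corr = np.corrcoef(X, rowvar=False)
to_drop = set()
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[1]):
        if abs(corr[i, j]) > 0.9:             # assumed cut-off
            to_drop.add(j)                    # keep i, drop j
keep = [i for i in range(X.shape[1]) if i not in to_drop]
print(keep)
```

Here the second column, a near-copy of the first, is eliminated while the independent third column survives.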

**Multicollinearity**: In some cases, high correlation may not be found for pairs of attributes, but if each attribute is regressed as a function of the others, we may see that the variability of some attributes is almost completely captured by the rest. This aspect is referred to as multicollinearity, and the Variance Inflation Factor (VIF) is a popular technique used to detect it. Attributes with high VIF values, generally greater than 10, are eliminated.
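
The VIF of attribute i is 1/(1 − R²), where R² comes from regressing attribute i on all the other attributes. A numpy sketch on synthetic data in which the third column is a near-exact sum of the first two (statsmodels also ships a ready-made `variance_inflation_factor`):

```python
import numpy as np

rng = np.random.default_rng(4)

n = 300
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = x1 + x2 + rng.normal(scale=0.1, size=n)  # multicollinear column
X = np.column_stack([x1, x2, x3])

def vif(X, i):
    """VIF of column i: 1 / (1 - R^2) of regressing column i on the rest."""
    y = X[:, i]
    A = np.column_stack([np.ones(len(y)), np.delete(X, i, axis=1)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1.0 - (y - A @ coef).var() / y.var()
    return 1.0 / (1.0 - r2)

for i in range(X.shape[1]):
    print(f"VIF of column {i}: {vif(X, i):.1f}")
```

All three columns exceed the usual cut-off of 10, even though no single pairwise correlation needs to be extreme.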

**Feature Ranking**: Decision Tree models such as CART can rank the attributes based on their importance or contribution to the predictability of the model. In high dimensional data, some of the lower ranked variables could be eliminated to reduce the dimensions.
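
A sketch using scikit-learn’s CART implementation on synthetic data where only the first two of five attributes actually drive the target:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)

n = 500
X = rng.normal(size=(n, 5))
# The target depends only on columns 0 and 1; columns 2-4 are noise.
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, y)
ranking = np.argsort(tree.feature_importances_)[::-1]
print("attributes ranked by importance:", ranking)
```

The noise columns land at the bottom of the ranking and could be dropped to reduce the dimensionality.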

**Forward selection:** When building multiple linear regression models with high-dimensional data, the model can be grown incrementally: at the start, only one attribute is used to build the regression model, and the remaining attributes are then added one by one, each tested for its worthiness using the adjusted R² value. If the adjusted R² shows a noticeable improvement, the variable is retained; otherwise it is discarded.
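
The procedure can be sketched with ordinary least squares and adjusted R², on synthetic data where only columns 0 and 2 drive the target; stopping as soon as adjusted R² no longer improves is one common choice of rule:

```python
import numpy as np

rng = np.random.default_rng(6)

n, p = 200, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

def adjusted_r2(X_sub, y):
    """Adjusted R^2 of an OLS fit of y on X_sub (with intercept)."""
    n, k = X_sub.shape
    A = np.column_stack([np.ones(n), X_sub])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1.0 - (y - A @ coef).var() / y.var()
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

selected, remaining, best = [], list(range(p)), -np.inf
while remaining:
    scores = {j: adjusted_r2(X[:, selected + [j]], y) for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best:       # no improvement: stop
        break
    best = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected attributes:", selected)
```

The two informative columns are picked up first, in order of their strength.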

**Feature Extraction Techniques**

In feature extraction techniques, the high-dimensional attributes are combined into lower-dimensional components (as in PCA or ICA) or factored into lower-dimensional latent factors (as in FA).

**Principal Component Analysis (PCA)**

Principal Component Analysis, or PCA, is a dimensionality-reduction technique in which high-dimensional correlated data is transformed into a lower-dimensional set of uncorrelated components, referred to as principal components. These lower-dimensional principal components capture most of the information in the high-dimensional dataset. An n-dimensional dataset is transformed into n principal components, and a subset of these is selected based on the percentage of the variance in the data that the principal components are intended to capture. Figure 5 shows a simple example in which 10-dimensional data is transformed into 10 principal components. To capture 90% of the variance in the data, only 3 principal components are needed; hence, we have reduced the 10-dimensional data to 3 dimensions.
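
With scikit-learn the whole workflow takes a few lines. The data below is synthetic, built so that 10 observed attributes are driven by 3 underlying directions of variance, loosely mirroring the figure-5 example:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)

# 500 samples of 10-dimensional data with 3 dominant directions of variance.
n = 500
latent = rng.normal(size=(n, 3)) * np.array([5.0, 3.0, 2.0])
mixing = rng.normal(size=(3, 10))
X = latent @ mixing + rng.normal(scale=0.1, size=(n, 10))

pca = PCA(n_components=10).fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)
needed = int(np.searchsorted(cumulative, 0.90)) + 1
print("components needed for 90% of the variance:", needed)
```

`explained_variance_ratio_` gives the fraction of variance per component, so its cumulative sum tells us how many components to keep for a chosen variance target.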

**Figure 5. Example of converting 10-dimensional data to 3-dimensional data through PCA**

**Factor Analysis (FA)**

Factor analysis is based on the assumption that all the observed attributes in a dataset can be represented as weighted linear combinations of latent factors. The intuition behind this technique is that n-dimensional data can be represented by m factors (m < n). The main difference between PCA and FA is that while PCA synthesizes components from the base attributes, FA decomposes the attributes into latent factors, as shown in figure 6.
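
A minimal scikit-learn sketch on synthetic data generated from two latent factors; the estimated loading matrix plays the role of the weights in the linear combination:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(9)

# Six observed attributes generated from two latent factors plus noise.
n = 500
factors = rng.normal(size=(n, 2))
loadings = rng.normal(size=(2, 6))
X = factors @ loadings + rng.normal(scale=0.3, size=(n, 6))

fa = FactorAnalysis(n_components=2).fit(X)
print("estimated loading matrix shape:", fa.components_.shape)
```

`fa.transform(X)` would then return the two-dimensional factor scores for each sample, reducing the six observed attributes to two latent dimensions.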

**Independent Component Analysis (ICA)**

ICA assumes that all the attributes are essentially mixtures of independent components and resolves the variables into a combination of these independent components. ICA is perceived to be more robust than PCA and is generally used when PCA and FA fail.
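
The classic illustration is un-mixing independent signals. The sketch below mixes a square wave and a sine wave and uses scikit-learn’s FastICA to recover them (up to scale and order):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two independent, non-Gaussian sources observed only as mixtures.
t = np.linspace(0, 8, 2000)
S = np.column_stack([np.sign(np.sin(3 * t)),   # square wave
                     np.sin(5 * t)])           # sine wave
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])                     # mixing matrix
X = S @ A.T                                    # observed mixed signals

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                   # recovered sources
# Each true source should correlate strongly with one recovered component.
corr = np.abs(np.corrcoef(S.T, S_est.T))[:2, 2:]
print(corr.round(2))
```

The correlation matrix shows each original source matching one recovered component almost perfectly, which is exactly the “resolving into independent components” the technique promises.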

**References**

[2] C. C. Aggarwal, A. Hinneburg, and D. A. Keim, “On the Surprising Behavior of Distance Metrics in High Dimensional Space,” Proc. ICDT 2001.
