Lasso Regression
  1. Understanding Lasso Regression
  2. Regularization
  3. Regularization Techniques
  4. Lasso Regression
  5. Mathematical equation of Lasso Regression
  6. Lasso Regression Implementation in Python
  7. Lasso Regression Implementation in R
  8. Lasso Regression Vs Ridge Regression

Contributed by: Dinesh Kumar

Introduction

In this blog, we will look at the techniques used to overcome overfitting in a regression model. Regularization is one of the most widely used methods for making a model generalize better.

Regularization

Regularization is an important concept used to avoid overfitting, especially when the model performs much better on the training data than on the test data.

Regularization is implemented by adding a “penalty” term to the best fit derived from the training data, in order to achieve lower variance on the test data. It also restricts the influence of the predictor variables on the output variable by shrinking their coefficients.

In regularization, we normally keep the same number of features but reduce the magnitude of the coefficients. This is done using different types of regression techniques that apply regularization to overcome the problem, so let us discuss them.

Regularization Techniques

There are two main regularization techniques, namely Ridge Regression and Lasso Regression. They differ in the way they assign a penalty to the coefficients. In this blog, we will try to understand more about the Lasso regularization technique.


Lasso Regression

“LASSO” stands for Least Absolute Shrinkage and Selection Operator. Lasso regression is a regularization technique used over regression methods for a more accurate prediction. This model uses shrinkage, where data values are shrunk towards a central point, such as the mean. The lasso procedure encourages simple, sparse models (i.e. models with fewer parameters). This particular type of regression is well suited for models showing high levels of multicollinearity, or when you want to automate certain parts of model selection, like variable selection/parameter elimination.

Lasso regression uses the L1 regularization technique (discussed later in this article). It is used when we have a large number of features, because it automatically performs feature selection.


Mathematical equation of Lasso Regression

Residual Sum of Squares + λ * (sum of the absolute values of the coefficients)
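Written out in standard notation (with $y_i$ the target, $x_{ij}$ the predictors and $\beta_j$ the coefficients, for $n$ observations and $p$ features), the quantity being minimised is:

$$\sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\right)^2 \;+\; \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert$$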


Where,

  • λ denotes the amount of shrinkage.
  • λ = 0 implies all features are considered; this is equivalent to linear regression, where only the residual sum of squares is used to build the predictive model.
  • λ = ∞ implies no feature is considered, i.e., as λ approaches infinity it eliminates more and more features (illustrated in the sketch below this list).
  • Bias increases as λ increases.
  • Variance increases as λ decreases.
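The following is a minimal sketch of this behaviour on synthetic data (the dataset and the alpha values are illustrative assumptions; in scikit-learn the shrinkage parameter λ is called alpha):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Illustrative synthetic data: 10 features, only a few of them informative
X, y = make_regression(n_samples=100, n_features=10, n_informative=4, noise=5, random_state=1)

# As the shrinkage parameter grows, more coefficients are driven exactly to zero
for alpha in [0.01, 1, 10, 100]:
    model = Lasso(alpha=alpha).fit(X, y)
    print(alpha, "non-zero coefficients:", np.sum(model.coef_ != 0))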

Lasso Regression Implementation in Python

For this example code, we will consider a dataset from MachineHack’s Predicting Restaurant Food Cost Hackathon.

About the Data Set

The task here is about predicting the average price for a meal. The data consists of the following features.

Size of training set: 12,690 records

Size of test set: 4,231 records

Columns/Features

TITLE: The feature of the restaurant that helps identify what it offers and whom it is suitable for.

RESTAURANT_ID: A unique ID for each restaurant.

CUISINES: The variety of cuisines that the restaurant offers.

TIME: The open hours of the restaurant.

CITY: The city in which the restaurant is located.

LOCALITY: The locality of the restaurant.

RATING: The average rating of the restaurant by customers.

VOTES: The overall votes received by the restaurant.

COST: The average cost of a two-person meal.

After completing all the steps up to (but excluding) feature scaling, we can proceed to building the lasso regression. We skip feature scaling because the lasso regressor comes with a parameter that normalises the data while fitting it to the model.


Let’s Code!

import numpy as np

Creating New Train and Validation Datasets

from sklearn.model_selection import train_test_split
data_train, data_val = train_test_split(new_data_train, test_size = 0.2, random_state = 2)

Classifying Predictors and Target

#Classifying Independent and Dependent Features
#_______________________________________________
#Dependent Variable
Y_train = data_train.iloc[:, -1].values
#Independent Variables
X_train = data_train.iloc[:,0 : -1].values
#Independent Variables for Test Set
X_test = data_val.iloc[:,0 : -1].values

Evaluating the Model with RMSLE

def score(y_pred, y_true):
    # Score = 1 - RMSLE (root mean squared log error); higher is better
    error = np.square(np.log10(y_pred + 1) - np.log10(y_true + 1)).mean() ** 0.5
    score = 1 - error
    return score

actual_cost = list(data_val['COST'])
actual_cost = np.asarray(actual_cost)


Building the Lasso Regressor

#Lasso Regression


from sklearn.linear_model import Lasso
# Initializing the Lasso regressor with the normalization factor set to True
lasso_reg = Lasso(normalize=True)
# Fitting the training data to the Lasso regressor
lasso_reg.fit(X_train, Y_train)
# Predicting for X_test
y_pred_lass = lasso_reg.predict(X_test)
# Printing the score with RMSLE
print("\n\nLasso SCORE : ", score(y_pred_lass, actual_cost))


Output

0.7335508027883148

The lasso regression attained a score of about 0.73 (i.e. 1 − RMSLE) on the validation set for the given dataset.
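Note that the normalize argument used above was removed in recent scikit-learn releases (1.2 onwards). On newer versions, a rough substitute (not numerically identical to the old behaviour, but the recommended approach) is to scale inside a pipeline; a minimal sketch, reusing the X_train, Y_train, X_test and score objects from above:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso

# Standardize the features, then fit the lasso; alpha plays the role of λ
lasso_pipe = make_pipeline(StandardScaler(), Lasso(alpha=1.0))
lasso_pipe.fit(X_train, Y_train)
y_pred_lass = lasso_pipe.predict(X_test)
print("Lasso SCORE : ", score(y_pred_lass, actual_cost))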


Lasso Regression Implementation in R

Let us take the “Big Mart Sales” dataset, which contains product-wise sales for multiple outlets of a retail chain.

In the dataset, we can see characteristics of the sold item (fat content, visibility, type, price), some characteristics of the outlet (year of establishment, size, location, type), and the number of units sold of that particular item. Let’s see if we can predict sales using these features.
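A minimal sketch of a lasso fit in R using the glmnet package (this is an illustrative sketch, not the original code from the post; it assumes the Big Mart data has already been one-hot encoded into a numeric matrix x, with the sales figures as the vector y):

# glmnet fits the lasso when alpha = 1; x and y are assumed to be prepared as described above
library(glmnet)

# Choose lambda by cross-validation
cv_fit <- cv.glmnet(x, y, alpha = 1)
best_lambda <- cv_fit$lambda.min

# Refit the lasso with the selected lambda
lasso_fit <- glmnet(x, y, alpha = 1, lambda = best_lambda)

# Coefficients that were not selected are exactly zero
print(coef(lasso_fit))

# Predictions on the same design matrix (replace with a test matrix in practice)
predictions <- predict(lasso_fit, newx = x)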


Lasso Regression Vs Ridge Regression

Lasso regression differs from ridge regression in that it uses the absolute values of the coefficients in its penalty term, rather than their squares.

Because the penalty only involves the absolute values of the coefficients (weights), the optimization algorithm penalizes large coefficients. This penalty is known as the L1 norm.
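Concretely, the two penalty terms added to the residual sum of squares are:

  • Lasso (L1 penalty): $\lambda \sum_{j=1}^{p} \lvert \beta_j \rvert$
  • Ridge (L2 penalty): $\lambda \sum_{j=1}^{p} \beta_j^{2}$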

Geometrically, the constraint region (often drawn as a blue area) is a diamond for the lasso and a circle for ridge, while the contours of the loss function, i.e. the RSS, are ellipses (often drawn in green).

For both regression techniques, the coefficient estimates are given by the first point at which a contour ellipse touches the constraint region (the circle or the diamond).

Because the lasso constraint is a diamond, it has corners on each of the axes, so the ellipse will often first touch the constraint region at a corner. When that happens, at least one of the coefficients is exactly zero.

In other words, when λ is sufficiently large, lasso regression will shrink some of the coefficient estimates exactly to zero. That is why the lasso provides sparse solutions.
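A quick way to see this sparsity empirically (a minimal sketch on synthetic data, separate from the restaurant example above):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Synthetic data in which only a handful of the 20 features are informative
X, y = make_regression(n_samples=200, n_features=20, n_informative=5, noise=10, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)

# Lasso sets many coefficients exactly to zero; ridge only shrinks them towards zero
print("Coefficients set to zero by lasso:", int(np.sum(lasso.coef_ == 0)))
print("Coefficients set to zero by ridge:", int(np.sum(ridge.coef_ == 0)))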

The main problem with lasso regression is that when we have correlated variables, it tends to retain only one of them and set the other correlated variables to zero. That can lead to some loss of information, resulting in lower accuracy of our model.

That was the Lasso regularization technique; I hope you now understand it better. You can use it to improve the accuracy of your machine learning models.

If you found this blog helpful and wish to learn more such concepts, you can join Great Learning Academy’s free online courses today.
