Introduction to Regularization
Regularization is an important, unavoidable step in machine learning model building: it improves a model's predictions and reduces its errors. It is also called the shrinkage method, because it adds a penalty term to the cost function that constrains overly complex models, avoiding overfitting by reducing variance.
Let's discuss the available methods, their implementation, and their benefits in detail here.
With many parameters/attributes/features, the possible combinations of attribute values can, in principle, capture the relationship between the dependent and independent variables. But to learn that relationship we need adequate data: enough records to cover those combinations.
If you have few records but many attributes, not all combinations of values among the dependent and independent variables will be represented in the data. Those gaps, for better or worse, shape your model. This circumstance is called the Curse of Dimensionality, and it is not simply about having too many dimensions; it is about lacking the data to cover the possible combinations within them.
Put another way, the missing combinations leave empty regions in the feature space, so we cannot connect the dots and build the ideal model. The data points are spread thinly across a high-dimensional space, and the algorithm struggles to find a reliable relationship between the dependent and independent variables for predicting future data. If you try to visualize this space, it quickly becomes complex and hard to follow.
During training the model may look fine, but during testing, on new data combinations it has never seen, its accuracy jumps around and it suffers from error, the variance error. Such a model is not fit to move to production and is risky to use for prediction.
With too many dimensions and too little data, the algorithm builds a "best fit" full of sharp peaks and deep valleys, driven by coefficients of high magnitude. This drastic fluctuation in the fitted surface is overfitting, and the resulting model is not suitable for production.
To understand and implement these techniques, we first need to understand the cost function of a linear model.
Understanding the Regression Graph
The graph below shows all the parameters of a linear regression model and is largely self-explanatory.
Significance Of Cost Function
Cost function/Error function: takes the slope and intercept values (m and c) and returns the error/cost value. It measures how far the predicted outcomes are from the actual outcomes, i.e., how inaccurate the model's predictions are.
It is used to estimate how badly a model is performing on the given dataset and its dimensions.
Why is the cost function important in machine learning? Because it is what guides us toward the optimal solution. So how do we get there? We will see the available methods, in simple steps, using Python libraries.
This function helps us figure out the best straight line by minimizing the error.
The best-fit line is the line around which the sum of squared errors is minimized.
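To make this concrete, here is a minimal sketch of such a cost function (the helper name `cost` and the toy data points are illustrative assumptions, not from a real dataset):

```python
import numpy as np

def cost(m, c, x, y):
    """Sum of squared errors of the line y = m*x + c against the targets y."""
    y_pred = m * x + c
    return np.sum((y - y_pred) ** 2)

# Toy data roughly following y = 2x
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])

print(cost(2.0, 0.0, x, y))  # a good slope/intercept gives a small cost
print(cost(0.5, 1.0, x, y))  # a poor slope/intercept gives a large cost
```

Minimizing this value over m and c yields the best-fit line.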
Regularization Techniques
Let's discuss the available regularization techniques, followed by their implementation.
1. Ridge Regression (L2 Regularization):
Basically, here we minimize the sum of the squared errors plus the sum of the squared coefficients (β). In the background, coefficients (β) with a large magnitude generate sharp peaks and steep slopes in the fitted surface; to suppress them we use lambda (λ), called the Penalty Factor, which gives us a smooth surface instead of an irregular one. Ridge regression pushes the coefficients (β) toward zero in magnitude. This is L2 regularization, since it adds a penalty equivalent to the square of the magnitude of the coefficients.
Ridge Regression = Loss function + Regularized term
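As a small sketch of what the penalty does (the toy data and λ values here are assumptions for illustration), the closed-form ridge solution β = (XᵀX + λI)⁻¹Xᵀy shows the coefficient magnitudes shrinking as λ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
# True coefficients [3, -2, 0.5] plus a little noise
y = X @ np.array([3.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

# Closed-form ridge estimate: beta = (X^T X + lambda * I)^{-1} X^T y
for lam in (0.0, 1.0, 100.0):
    beta = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    print("lambda =", lam, "->", np.round(beta, 3))
```

With λ = 0 we recover ordinary least squares; as λ increases, every coefficient is pulled toward zero.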
2. Lasso Regression (L1 Regularization):
This is very similar to ridge regression, with a small difference in the penalty factor: it uses the magnitude of the coefficients instead of their squares. With Lasso, many coefficients can become exactly zero, so the corresponding attributes/features are dropped from the list; this ultimately reduces the number of dimensions and supports dimensionality reduction. In effect, it decides that those attributes/features are not suitable as predictors of the target value. This is L1 regularization, because it adds a penalty equivalent to the absolute value of the magnitude of the coefficients.
Lasso Regression = Loss function + Regularized term
3. Characteristics of Lambda
| Lambda or Penalty Factor (λ) | λ = 0 | λ minimal | λ high |
|---|---|---|---|
| Effect | No impact on the coefficients (β); the model overfits. Not suitable for production. | Generalised model with acceptable accuracy on both train and test. Fit for production. | Very high impact on the coefficients (β), leading to underfit. Ultimately not fit for production. |
Remember one thing: Ridge never makes coefficients exactly zero, while Lasso does. So you can use Lasso for feature selection.
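A quick sketch of that difference (synthetic data; the alpha values are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = 5 * X[:, 0] + rng.normal(scale=0.5, size=100)  # only feature 0 matters

print(np.round(Ridge(alpha=1.0).fit(X, y).coef_, 3))  # all shrunk, none exactly zero
print(np.round(Lasso(alpha=0.5).fit(X, y).coef_, 3))  # irrelevant features become exactly 0
```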
Impact of Regularization
The graphical representation below clearly shows the best fit.
4. Elastic-Net Regression Regularization:
Elastic-Net combines the Ridge (L2) and Lasso (L1) penalties in one model, blended by a mixing parameter α; a short code sketch follows the list below. The extreme values of α recover the two techniques above:
- Ridge: α=0
- Lasso: α=1
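Here is a short scikit-learn sketch (synthetic data; the alpha and l1_ratio values are illustrative assumptions). In scikit-learn's ElasticNet, the l1_ratio parameter plays the role of the mixing parameter α above:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.5]) + rng.normal(scale=0.2, size=100)

# l1_ratio=1 behaves like Lasso, l1_ratio=0 like a Ridge-style penalty
enet = ElasticNet(alpha=0.1, l1_ratio=0.5)
enet.fit(X, y)
print(np.round(enet.coef_, 3))
```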
5. Pictorial representation of Regularization Techniques
Mathematical approach for L1 and L2
Even though Python provides excellent libraries and straightforward coding, we should understand the mathematics behind this. Here is the detailed derivation for your reference.
Let's take the multi-linear regression dataset below and its equation.
As we know, the multiple linear regression equation is

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_n x_n \qquad (1)$$

so each prediction can be written as

$$\hat{y}_i = \beta_0 + \sum_{j} \beta_j x_{ij} \qquad (2)$$

and the residual for observation $i$ is $y_i - \beta_0 - \sum_{j} \beta_j x_{ij}$.

Cost/Loss function:

$$\sum_{i} \Big( y_i - \beta_0 - \sum_{j} \beta_j x_{ij} \Big)^2 \qquad (3)$$

Regularized term:

$$\lambda \sum_{j} \beta_j^2 \qquad (4)$$

Ridge Regression = Loss function + Regularized term (5)

Substituting (3) and (4) into (5):

$$\text{Ridge Regression} = \sum_{i} \Big( y_i - \beta_0 - \sum_{j} \beta_j x_{ij} \Big)^2 + \lambda \sum_{j} \beta_j^2$$

$$\text{Lasso Regression} = \sum_{i} \Big( y_i - \beta_0 - \sum_{j} \beta_j x_{ij} \Big)^2 + \lambda \sum_{j} |\beta_j|$$
- x ==> independent variables
- y ==> target variables
- β ==> coefficients
- λ ==> penalty-factor
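A tiny numeric check of these objectives (the data, coefficients, and λ are made-up toy values):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.5]])
y = np.array([4.0, 3.0, 6.0])
beta0, beta, lam = 0.5, np.array([1.0, 0.8]), 0.1

residuals = y - beta0 - X @ beta
loss = np.sum(residuals ** 2)                    # equation (3)
ridge_cost = loss + lam * np.sum(beta ** 2)      # Ridge objective
lasso_cost = loss + lam * np.sum(np.abs(beta))   # Lasso objective
print(loss, ridge_cost, lasso_cost)
```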
How coefficients (β) are calculated internally
Code for Regularization
Let's take an automobile predictive-analysis problem, apply L1 and L2, and see how they affect the model scores.
Objective: Predict the Mileage/Miles Per Gallon (mpg) of a car using the given features of the car.
print("*************************") print("Import required libraries") print("*************************") %matplotlib inline import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklearn.linear_model import Ridge from sklearn.linear_model import Lasso from sklearn.metrics import r2_score Output print("*************************") print("Import required libraries") print("*************************") %matplotlib inline import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklearn.linear_model import Ridge from sklearn.linear_model import Lasso from sklearn.metrics import r2_score Output
```
*************************
Using auto-mpg dataset
*************************
```
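The loading step is not shown above; a plausible sketch is below, where the file name 'auto-mpg.csv' is an assumption about how the dataset was stored:

```python
# Assumed loading step (file name is an assumption)
df_cars = pd.read_csv('auto-mpg.csv')
df_cars.head()
```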
EDA: We will do a little EDA (Exploratory Data Analysis) to understand the dataset.
print("############################################") print(" Info Of the Data Set") print("############################################") df_cars.info()
Observation:
1. We can see the features and their data types, along with the null counts.
2. horsepower and name are object types in the given dataset; these have to be taken care of during modelling.
Data Cleaning/Wrangling:
This is the process of cleaning and consolidating complex data sets for easy access and analysis.
- Action:
  - replace('?', 'NaN')
  - Convert the "horsepower" object type into int
```python
# Replace '?' placeholders with NaN, impute with the mean, then cast to int
df_cars.horsepower = df_cars.horsepower.str.replace('?', 'NaN', regex=False).astype(float)
df_cars.horsepower.fillna(df_cars.horsepower.mean(), inplace=True)
df_cars.horsepower = df_cars.horsepower.astype(int)
print("######################################################################")
print(" After cleaning and type conversion in the Data Set")
print("######################################################################")
df_cars.info()
```
Output
Observation:
1. We can see the features/columns/fields and their data types, along with the null counts.
2. horsepower is now int type; name is still an object type, since this column is not going to be used as a predictor either way.
```python
# Statistics of the data
display(df_cars.describe().round(2))
```
```python
# Skewness and kurtosis
print("Skewness: %f" % df_cars['mpg'].skew())
print("Kurtosis: %f" % df_cars['mpg'].kurt())
```
Output: look at the values below, and at how mpg is distributed in the plot.
```
Skewness: 0.457066
Kurtosis: -0.510781
```
```python
# Distribution of the target variable (note: distplot is deprecated in
# newer seaborn; histplot/displot are the modern equivalents)
sns_plot = sns.distplot(df_cars["mpg"])
```
```python
plt.figure(figsize=(10, 6))
sns.heatmap(df_cars.corr(), cmap=plt.cm.Reds, annot=True)
plt.title('Heatmap', fontsize=13)
plt.show()
```
Output: Look at the heatmap
There is a strong NEGATIVE correlation between mpg and the below features
- Displacement
- Horsepower
- Weight
- Cylinders
So, if those variables increase, the mpg will decrease.
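One note before feature selection: the predictor list below contains one-hot origin dummies and no name column, so a preprocessing step along these lines must have been applied. This is an assumed reconstruction, including the origin code mapping:

```python
# Assumed preprocessing (not shown in the original article):
# drop the non-predictive 'name' column and one-hot encode 'origin'.
# Origin codes assumed: 1 = america, 2 = europe, 3 = asia.
df_cars = df_cars.drop('name', axis=1)
df_cars['origin'] = df_cars['origin'].replace({1: 'america', 2: 'europe', 3: 'asia'})
df_cars = pd.get_dummies(df_cars, columns=['origin'])
```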
Feature Selection
print("Predictor variables") X = df_cars.drop('mpg', axis=1) print(list(X.columns)) print("Dependent variable") y = df_cars[['mpg']] print(list(y.columns))
Output: here is the feature selection

```
Predictor variables
['cylinders', 'displacement', 'horsepower', 'weight', 'acceleration', 'model_year', 'origin_america', 'origin_asia', 'origin_europe']
Dependent variable
['mpg']
```
Scaling the features to bring the data onto a comparable range
```python
from sklearn import preprocessing

print("Scale all the columns successfully done")
X_scaled = preprocessing.scale(X)
X_scaled = pd.DataFrame(X_scaled, columns=X.columns)
y_scaled = preprocessing.scale(y)
y_scaled = pd.DataFrame(y_scaled, columns=y.columns)
```
Output
Scale all the columns successfully done
Train and Test Split
```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X_scaled, y_scaled, test_size=0.25, random_state=1)
```
Fit a LinearRegression model and inspect the coefficients.
```python
regression_model = LinearRegression()
regression_model.fit(X_train, y_train)
for idx, column_name in enumerate(X_train.columns):
    print("The coefficient for {} is {}".format(column_name, regression_model.coef_[0][idx]))
```
Output: Try to understand the coefficient (βi)
```
The coefficient for cylinders is -0.08627732236942003
The coefficient for displacement is 0.385244857729236
The coefficient for horsepower is -0.10297215401481062
The coefficient for weight is -0.7987498466220165
The coefficient for acceleration is 0.023089636890550748
The coefficient for model_year is 0.3962256595226441
The coefficient for origin_america is 0.3761300367522465
The coefficient for origin_asia is 0.43102736614202025
The coefficient for origin_europe is 0.4412719522838424
```
```python
intercept = regression_model.intercept_[0]
print("The intercept for our model is {}".format(intercept))
```

Output
The intercept for our model is 0.015545728908811594
Scores (LR)
```python
print(regression_model.score(X_train, y_train))
print(regression_model.score(X_test, y_test))
```
Output
0.8140863295352218
0.843164735865974
Now we will apply the regularization techniques and review the scores and the impact of each technique on the model.
Create a regularized RIDGE model and inspect its coefficients.
```python
ridge = Ridge(alpha=0.3)
ridge.fit(X_train, y_train)
print("Ridge model:", (ridge.coef_))
```
Output: compare with the LR model coefficients
Ridge model: [[-0.07274955 0.3508473 -0.10462368 -0.78067332 0.01896661 0.39439233
0.29378926 0.36094062 0.37375046]]
Create a Regularized LASSO Model and coefficients
```python
lasso = Lasso(alpha=0.1)
lasso.fit(X_train, y_train)
print("Lasso model:", (lasso.coef_))
```
Output: compare with the LR and Ridge coefficients. Notice that several coefficients are zeroed (0); during fitting, the corresponding features are effectively excluded from the feature list.
Lasso model: [-0. -0. -0.01262531 -0.6098498 0. 0.29478559
-0.03712132 0. 0. ]
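A small helper (not in the original) makes the dropped features explicit:

```python
# List which features Lasso zeroed out, i.e. effectively dropped
dropped = [col for col, coef in zip(X_train.columns, lasso.coef_) if coef == 0]
print("Features dropped by Lasso:", dropped)
```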
Scores (RIDGE)
```python
print(ridge.score(X_train, y_train))
print(ridge.score(X_test, y_test))
```
Output
```
0.8139778320249321
0.8438110638424217
```
Scores (LASSO)
```python
print(lasso.score(X_train, y_train))
print(lasso.score(X_test, y_test))
```
Output
```
0.7866202435701324
0.8307559551832127
```
| | LR | RIDGE (L2) | LASSO (L1) |
|---|---|---|---|
| Train score | 81.4% | 81.4% | 78.7% |
| Test score | 84.3% | 84.4% | 83.1% |
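The table above can be reproduced with a short loop (a convenience sketch, not in the original):

```python
for name, model in [("LR", regression_model), ("Ridge", ridge), ("Lasso", lasso)]:
    print(name,
          round(model.score(X_train, y_train), 3),
          round(model.score(X_test, y_test), 3))
```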
Certainly, there is an impact on the model due to the Regularization of L2 and L1.
Compare L2 and L1 Regularization
Having seen the code-level implementation, I hope you can now relate the importance of the regularization techniques and their influence on model improvement. As a final touch, let's compare L1 and L2.
| Ridge Regression (L2) | Lasso Regression (L1) |
|---|---|
| Quite accurate and keeps all features | Can be more accurate than Ridge when many features are irrelevant |
| Penalty: λ × sum of the squared coefficients | Penalty: λ × sum of the absolute values of the coefficients |
| Coefficients are shrunk toward zero but never exactly zeroed | Coefficients can become exactly zero |
| Keeps all variables in the model | Performs feature selection by dropping zero-coefficient variables |
| Penalty is differentiable, enabling gradient-descent calculation | Penalty is not differentiable at zero |
Model fitment justification during training and testing (a quick diagnostic sketch follows this list):
- The model does strongly on the training set and poorly on the test set: we are at OVERFIT.
- The model does poorly on both training and test sets: we are at UNDERFIT.
- The model does well, and comparably, on both training and test sets: we are at the RIGHT FIT.
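A rough diagnostic sketch of these rules (the 0.1 gap and 0.5 floor are illustrative thresholds, not standard values):

```python
def fit_diagnosis(model, X_train, y_train, X_test, y_test):
    """Crude overfit/underfit check based on train/test R^2 scores."""
    train = model.score(X_train, y_train)
    test = model.score(X_test, y_test)
    if train - test > 0.1:
        return "possible OVERFIT (strong on train, weak on test)"
    if train < 0.5 and test < 0.5:
        return "possible UNDERFIT (weak on both)"
    return "RIGHT FIT (comparable on both)"

print(fit_diagnosis(ridge, X_train, y_train, X_test, y_test))
```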
Conclusion
I hope what we have discussed so far helps you see how and why regularization techniques are important and inescapable while building a model. Thanks for your valuable time in reading this article. I will get back to you with more interesting topics. Until then, bye! Cheers! Shanthababu.