Understanding Perfect Separation Error in Logistic Regression and How to Address It
Introduction:
Logistic regression is a statistical technique for predicting the probability of a binary outcome (i.e., a variable that can take one of two possible values). It is a form of regression analysis in which the response variable is categorical, most often binary.
In logistic regression, a logistic function models the relationship between a set of independent variables (also called predictor or explanatory variables) and a binary dependent variable. The logistic function is an S-shaped curve whose output ranges from 0 to 1, representing the probability that the dependent variable takes a particular value.
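As a quick illustration, here is a minimal sketch of the logistic (sigmoid) curve in R; the helper name sigmoid is our own choice for the example:
# The logistic function maps any real number into the interval (0, 1)
sigmoid <- function(x) 1 / (1 + exp(-x))
sigmoid(c(-5, 0, 5))   # roughly 0.007, 0.500, 0.993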
In logistic regression, perfect separation occurs when a predictor variable (or a combination of predictor variables) perfectly predicts the outcome variable. This means that for every observation in the dataset, the value of the predictor variable(s) unambiguously determines the outcome variable.
When perfect separation occurs, the maximum likelihood estimates (MLEs) of the logistic regression coefficients do not exist in finite form: the estimates diverge towards infinity, so standard fitting algorithms cannot converge. In such cases the logistic regression model fails to fit and produces what is known as the "perfect separation error".
Real life example:
One common example of perfect separation occurs when a single predictor variable completely determines the outcome variable. For instance, suppose we are trying to predict whether students pass or fail an exam, and one of the predictor variables is whether or not the student attended the review session. If every student who attended the review session passed the exam, and every student who did not attend failed, then there is perfect separation, and it becomes impossible to estimate the effects of the predictors on the outcome. A toy version of this example is sketched below.
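A hypothetical toy version of this example in R (the variable names attended and passed are our own) reproduces the problem:
# 1 = attended the review session / passed the exam; the two vectors coincide
attended <- c(1, 1, 1, 1, 0, 0, 0, 0)
passed   <- c(1, 1, 1, 1, 0, 0, 0, 0)
glm(passed ~ attended, family = binomial)
# R will typically warn: glm.fit: fitted probabilities numerically 0 or 1 occurred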
Explanation using a simulation:
About the data:
The dataset is available at the GitHub URL used in the code below. The data contains three columns, namely Y, X1 and X2, where Y is the categorical dependent variable and X1 and X2 are the independent variables.
Simulation:
The following code is run in the R environment.
Input:
# Read the data from the GitHub repository
data <- read.csv("https://raw.githubusercontent.com/GouravDutta-datarlabs/Perfect-separation-error/main/sep_error.csv")

# Fit a logistic regression of Y on X1 and X2
model1 <- glm(Y ~ X1 + X2, data = data, family = binomial)
Output:
Warning messages:
1: glm.fit: algorithm did not converge
2: glm.fit: fitted probabilities numerically 0 or 1 occurred
Let’s decode the above code now:
We read the CSV file from the GitHub repository into the data variable, and then try to fit a logistic regression of Y on the given variables. Instead of a fitted model, R returns warning messages of the kind shown above, which signal perfect separation.
Similarly, in Python:
Input:
import pandas as pd
import statsmodels.api as sm

# Same dataset as above; add_constant supplies the intercept column
data = pd.read_csv("https://raw.githubusercontent.com/GouravDutta-datarlabs/Perfect-separation-error/main/sep_error.csv")
Y_train, X_train = data["Y"], sm.add_constant(data[["X1", "X2"]])

logit = sm.GLM(Y_train, X_train, family=sm.families.Binomial())
result = logit.fit()
Output:
PerfectSeparationError: Perfect separation detected, results not available
Again, the perfect separation error is detected, this time raised by statsmodels.
Simple explanation of the concept:
We can see that observations with Y = 0 all have values of X1 <= 3, and observations with Y = 1 all have values of X1 > 3. In other words, Y separates X1 perfectly. The other way to see it is that X1 predicts Y perfectly, since X1 <= 3 corresponds to Y = 0 and X1 > 3 corresponds to Y = 1. If we were to dichotomize X1 into a binary variable using a cut point of 3, what we would get is exactly Y.

That is, we have found a perfect predictor X1 for the outcome variable Y. In terms of predicted probabilities, we have Prob(Y = 1 | X1 <= 3) = 0 and Prob(Y = 1 | X1 > 3) = 1, with no model estimation required. This is what triggers the "perfect separation error".
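We can check this in R with a simple cross-tabulation of Y against the X1 <= 3 cut point, assuming the data object loaded earlier:
# If separation holds, the off-diagonal cells of this 2x2 table are zero
table(data$Y, data$X1 <= 3)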
Solutions to overcome perfect separation error:
Solution 1:
# Refit the model without X1, the variable causing the separation
model2 <- glm(Y ~ X2, data = data, family = binomial)
summary(model2)
Let’s decode the above code now:
To avoid the warning message, we remove the column X1, since it is the variable causing the perfect separation. This can be a working solution, but it discards the information carried by X1.
Solution 2:
# Fit the same model with Firth-type bias reduction from the brglm2 package
library(brglm2)
model3 <- glm(Y ~ X1 + X2, data = data, family = binomial, method = "brglmFit")
summary(model3)
Let’s decode the above code now:
To avoid the warning message, we fit the logistic regression using bias reduction, as specified by method = "brglmFit" from the brglm2 package. Bias reduction penalizes the likelihood in the spirit of Firth's method, so the coefficient estimates stay finite even though X1 perfectly separates the outcome, and the model fits cleanly on the full data.
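The logistf package provides another widely used implementation of Firth's penalized likelihood; a minimal sketch, assuming the same data object as above:
library(logistf)
# Firth's penalized likelihood keeps the coefficient estimates finite
model4 <- logistf(Y ~ X1 + X2, data = data)
summary(model4)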
Some strategies to deal with perfect separation error:
Removing the offending predictor variable(s) from the model, as in Solution 1 above. However, this discards the information those variables carry.
Combining levels of the predictor variable(s) so that there is some variability in the data.
Using a bias reduction approach: the idea behind these techniques is to introduce a small amount of bias into the parameter estimates in order to stabilize the model and improve its performance. One of the most commonly used bias reduction techniques is Firth's logistic regression, also known as penalized maximum likelihood estimation; Solution 2 above is of this type.
Using penalized likelihood methods, such as ridge regression or lasso regression. These methods add a penalty term to the likelihood function, which can help stabilize the estimates and reduce the impact of the offending predictor variable(s); see the sketch after this list.
Bayesian logistic regression is another approach for dealing with perfect separation. Bayesian methods handle it by incorporating prior information about the parameters into the model; a sketch also follows below.
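As a sketch of the penalized likelihood route, here is a ridge-penalized logistic regression using the glmnet package; the lambda value is an arbitrary illustrative choice (in practice it would be tuned, e.g. with cv.glmnet):
library(glmnet)
x <- as.matrix(data[, c("X1", "X2")])
# alpha = 0 gives the ridge penalty; alpha = 1 would give the lasso
fit_ridge <- glmnet(x, data$Y, family = "binomial", alpha = 0, lambda = 0.1)
coef(fit_ridge)   # finite coefficients despite the separation
And for the Bayesian route, the bayesglm function in the arm package fits a logistic regression with weakly informative default priors that keep the estimates finite:
library(arm)
model_bayes <- bayesglm(Y ~ X1 + X2, data = data, family = binomial)
summary(model_bayes)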
Conclusion:
In this blog we explained what a perfect separation error is and covered methods to solve it. In conclusion, perfect separation in logistic regression is an important issue to be aware of when fitting models to data. By understanding the causes and consequences of perfect separation, and by employing appropriate strategies to deal with it, researchers can ensure that their models are robust and accurate.