Regression is a statistical analysis used to identify a potential relationship between variables. It evaluates the extent to which one variable can influence another and, at the same time, determines the direction of that influence.
For example, regression analysis can be used to determine how much the dependent variable changes when the independent variable changes by one measurable unit. This is done with a regression equation, which can be constructed manually or with statistical software such as SPSS or MS Excel.
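The unit-change interpretation above can be sketched in a few lines of Python. The equation ŷ = 2.0 + 0.5·x below is hypothetical, with both coefficients invented purely for illustration:

```python
# Hypothetical regression equation: y-hat = b0 + b1 * x
# (intercept and slope are made-up values for illustration).
b0, b1 = 2.0, 0.5

def predict(x):
    """Predicted value of the dependent variable for a given x."""
    return b0 + b1 * x

# Increasing the independent variable by one unit changes the
# prediction by exactly the slope b1.
change = predict(11) - predict(10)
print(change)  # 0.5
```

Whatever the units of x, a one-unit increase always moves the prediction by the slope coefficient, which is precisely what the regression equation encodes.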
Linear regression fits a best-fit line to the data points on a scatter plot. In other words, this type of regression analysis helps estimate a linear trend in the data, which makes it well suited to continuous (non-discrete) quantitative data. Measuring the effect of individuals' age on their height in a sample is a textbook example of linear regression.
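A minimal sketch of how such a best-fit line is computed with ordinary least squares, using a small made-up sample of ages (years) and heights (cm) — all data points below are invented for illustration:

```python
# Hypothetical sample: ages in years, heights in cm (invented data,
# deliberately chosen to lie on a perfect line for clarity).
ages = [4, 6, 8, 10, 12]
heights = [100, 112, 124, 136, 148]

n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(heights) / n

# Ordinary least squares: slope = cov(x, y) / var(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, heights)) \
        / sum((x - mean_x) ** 2 for x in ages)
intercept = mean_y - slope * mean_x

print(slope, intercept)  # 6.0 76.0
```

Here the fitted line height = 76 + 6·age says that, within this invented sample, each additional year of age is associated with 6 cm of additional height.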
On the other hand, logistic regression allows us to estimate relationships in which the dependent variable is dichotomous, coded in binary form. According to Lowry, unlike linear regression, this type of regression does not allow trends to be extrapolated beyond the analyzed data.
However, it can be used to predict a binary outcome, whether coded as "1" or "0" or as "Yes" or "No." In this case, the logarithm of the odds serves as the dependent variable. Finally, it is essential to note that this type of regression does not work for categorical variables that can take more than two values. For example, if the variable is an individual's race, multinomial logistic regression may be more appropriate.
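The binary-outcome idea above can be sketched with a tiny logistic regression fitted by stochastic gradient descent. The data set (hours studied vs. pass/fail) and all numbers are hypothetical, invented only to show that the model maps a linear log-odds to a probability between 0 and 1:

```python
import math

# Hypothetical data: x = hours studied, y = pass (1) / fail (0).
xs = [1, 2, 3, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]

def sigmoid(z):
    """Convert log-odds into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

b0, b1 = 0.0, 0.0  # intercept and slope of the log-odds
lr = 0.1           # learning rate
for _ in range(5000):
    # Stochastic gradient descent on the log-loss, one sample at a time.
    for x, y in zip(xs, ys):
        p = sigmoid(b0 + b1 * x)  # predicted probability of outcome "1"
        b0 -= lr * (p - y)
        b1 -= lr * (p - y) * x

# The log-odds b0 + b1*x is linear in x; the prediction is binary.
print(sigmoid(b0 + b1 * 1) < 0.5)  # low x -> predicts "0"
print(sigmoid(b0 + b1 * 6) > 0.5)  # high x -> predicts "1"
```

Note that the quantity being modeled linearly is the log-odds, not the outcome itself; the sigmoid then converts that log-odds into a probability, which is thresholded to produce the binary "Yes"/"No" prediction.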