The OLS estimator is based on six assumptions, five of which embody the requirements that make it a reliable estimator. These five assumptions are:

- Linearity
- Full rank
- Regression model
- Spherical errors
- Non-stochastic regressors

If these five assumptions hold, the OLS estimator is trustworthy because it possesses statistical properties, namely efficiency, unbiasedness, and consistency, that give us the correct framework for drawing conclusions about the population from the parameter estimates obtained on the sample. This makes it possible to extrapolate the findings from the sample to the larger population. In that case, the OLS estimator is referred to as BLUE, which stands for:

- __B__est - The OLS estimator is efficient: it has the least variance among all linear unbiased estimators.
- __L__inear - Our estimator for `\beta` is a linear combination of the error term.
- __U__nbiased - On average, the estimated value of the parameter matches the population parameter (i.e., its true value).
- __E__stimator - `\hat{\beta}` is our candidate to estimate `\beta`'s actual value.
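The unbiasedness property can be checked numerically. The sketch below (all values are made up for illustration) holds the design matrix `X` fixed, simulates many samples, and shows that the average of the OLS estimates lands close to the true `\beta`:

```python
# Hypothetical Monte Carlo sketch of unbiasedness: with X fixed and a true
# beta chosen here for illustration, the average of the OLS estimates over
# many simulated samples should be close to beta.
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # fixed (non-stochastic) regressors
beta = np.array([1.0, 2.0, -0.5])                               # true parameters (illustrative)

estimates = []
for _ in range(5000):
    eps = rng.normal(scale=1.0, size=n)          # spherical errors
    y = X @ beta + eps
    b_hat = np.linalg.solve(X.T @ X, X.T @ y)    # OLS estimate
    estimates.append(b_hat)

print(np.mean(estimates, axis=0))  # close to [1.0, 2.0, -0.5]
```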

Efficiency and unbiasedness are finite-sample properties: they hold for any fixed value of *n*, where *n* is the sample size. If the five assumptions hold, the OLS estimator is also consistent, meaning that in the limit (for an infinite number of observations) the probability that the estimate diverges from the true value vanishes. Consistency is a large-sample property that holds as *n* approaches infinity while the number of explanatory variables *K* stays fixed.

The OLS estimator also has the property of linearity, which means that the estimator `\hat{\beta}` is a linear function of the error term `\epsilon`. If the assumptions of linearity, full rank, and non-stochastic regressors hold, we can demonstrate this property by deriving `\hat{\beta}` while applying the assumptions underpinning the OLS estimator. Our estimate of `\beta` is:

$$\bf \hat{\beta}= \left(X'X\right)^{-1}X'y$$
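As a quick sanity check of this closed-form expression, the sketch below (with made-up data) computes `\left(X'X\right)^{-1}X'y` directly and confirms it matches numpy's built-in least-squares solver:

```python
# Minimal numerical check: the closed-form OLS expression (X'X)^{-1} X'y
# agrees with the least-squares solution from np.linalg.lstsq.
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])  # illustrative design matrix
y = rng.normal(size=20)                                       # illustrative response

beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y      # closed-form OLS estimate
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))  # True
```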

Substituting `y = X\beta + \epsilon` gives:

$$\bf \hat{\beta}= \left(X'X\right)^{-1}X'\left(X \beta + \epsilon\right)$$

Then, distributing `\left(X'X\right)^{-1}X'` over the sum:

$$\bf \hat{\beta}= \left(X'X\right)^{-1}X'X \beta + \left(X'X\right)^{-1}X'\epsilon$$

Since `\left(X'X\right)^{-1}X'X` is the identity matrix, we obtain a linear expression:

$$\bf \hat{\beta}= \beta + \left(X'X\right)^{-1}X'\epsilon$$
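The decomposition above says the sampling error `\hat{\beta} - \beta` equals `\left(X'X\right)^{-1}X'\epsilon` exactly. A minimal sketch with simulated data (all values illustrative) confirms this identity:

```python
# The sampling error beta_hat - beta equals (X'X)^{-1} X' eps exactly,
# matching the derivation above.
import numpy as np

rng = np.random.default_rng(3)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # fixed regressors
beta = np.array([0.5, -1.0])                           # true parameters (illustrative)
eps = rng.normal(size=n)                               # simulated error term
y = X @ beta + eps

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
error_term = np.linalg.inv(X.T @ X) @ X.T @ eps
print(np.allclose(beta_hat - beta, error_term))  # True
```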

The population parameter `\beta` is constant, and the non-stochastic regressors assumption means that *X* is a matrix of constant terms, so the only random term is the error `\epsilon`. We can therefore conclude that `\hat{\beta}` is a linear combination of the error term.
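As a closing sketch, the consistency property discussed earlier can also be illustrated with simulated data (values chosen purely for illustration): as *n* grows with *K* fixed, the OLS estimate settles toward the true `\beta`:

```python
# Illustrative consistency sketch: the estimation error shrinks toward zero
# as the sample size n grows while the number of regressors stays fixed.
import numpy as np

rng = np.random.default_rng(1)
beta = np.array([1.0, 2.0])  # true parameters (illustrative)

errors = {}
for n in (100, 10_000, 1_000_000):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ beta + rng.normal(size=n)
    b_hat = np.linalg.solve(X.T @ X, X.T @ y)
    errors[n] = float(np.abs(b_hat - beta).max())
    print(n, errors[n])  # error shrinks as n grows
```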