
Accuracy and interpretation of multiple linear regression


Introduction:

You have fitted a linear regression and made predictions, but now you want to report how accurate the model is. In this post, we discuss the different accuracy metrics for linear regression.

(1) adjusted R-square and mean-squared-error:

In linear regression, R-square measures the goodness of fit. It ranges from 0 to 1, so it can also be read as a percentage from 0 to 100%. Roughly speaking, R-square is the fraction of the variance in the target variable that the regression explains: the more variance the regression explains, the better the fit. In multiple regression, the adjusted R-square is preferred because plain R-square never decreases when you add predictors; adjusted R-square penalizes extra predictors, so a high adjusted R-square percentage is a more honest summary of how efficient the model fit has been.
While adjusted R-square measures accuracy from the statistical point of view, the more application-oriented metric is the mean squared error (MSE). MSE is the mean of the squared prediction errors of the linear model over the training data. A small MSE means that, when the model is used, the typical prediction error will be near the square root of the MSE, i.e. the RMSE. So a smaller MSE directly translates into smaller prediction errors.
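The metrics above can be computed directly from the true and predicted values. Here is a minimal sketch with NumPy; `regression_metrics` and its arguments are illustrative names, not from the post, and `n_features` is the number of predictors, which the adjusted R-square formula needs.

```python
import numpy as np

def regression_metrics(y_true, y_pred, n_features):
    """R-square, adjusted R-square, MSE and RMSE for a fitted model."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = len(y_true)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - n_features - 1)
    mse = ss_res / n
    return r2, adj_r2, mse, np.sqrt(mse)
```

A perfect prediction gives R-square of 1 and MSE of 0; anything less shows up as a lower R-square and a positive RMSE in the units of the target.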

(2) coefficients and their significance:

In linear regression, the coefficient of an independent variable denotes the effect of that variable on the dependent variable. For example, if the coefficient of X in a regression problem is 0.2, then, in simple terms, a 1-unit increase in X increases Y by 0.2, holding the other predictors fixed. While this is roughly correct, the assumption that the independent variables are uncorrelated with each other rarely holds strongly in practical applications, but we will come to that later. We also often want to check whether the coefficients are statistically significant. In that case, we run the linear regression and check the p-value of each coefficient: if a p-value is higher than 0.05, the corresponding coefficient is conventionally considered not significant.
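The p-value check can be sketched as below; this is a hand-rolled version of the t-test on each coefficient, assuming SciPy is available, and `ols_with_pvalues` is an illustrative name. In practice, the summary of a `statsmodels` OLS fit reports the same p-values.

```python
import numpy as np
from scipy import stats

def ols_with_pvalues(X, y):
    """Fit y = X @ beta by least squares; return coefficients and p-values.

    The first returned coefficient/p-value belongs to the intercept.
    """
    X = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    y = np.asarray(y, dtype=float)
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)                # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)           # covariance of the estimates
    se = np.sqrt(np.diag(cov))                      # standard errors
    t_stat = beta / se
    p_values = 2 * stats.t.sf(np.abs(t_stat), df=n - k)   # two-sided test
    return beta, p_values
```

A predictor that truly drives the target should come back with a tiny p-value; a pure-noise predictor will usually land above the 0.05 cut-off.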

So these are the two main ways to discuss the results of your linear regression: you can either talk about the overall performance of the model, or discuss the significance of the predictor variables one by one. There are a few other techniques which can be a bit more business-specific.


median correlation metrics:

Sometimes, you will be more concerned with whether the predictions preserve the ordering of the original target variable than with their exact values. In such cases, you can compute the rank correlation between prediction and target on each of several folds of the data, and then report the median of those per-fold correlations as the median correlation.

bucket based rmse:

More often than not in business, it is not enough for the overall RMSE to be good; you also have to check whether the RMSE holds up across all the buckets of the target variable. For example, say you are trying to estimate network traffic for a telecom company. The value varies over time from quite low to quite high. Your estimate should have a low RMSE overall, but also a low RMSE on the high values as well as on the low values. If the regression model is not set up carefully, it may predict quite inaccurate values in the low range while being good enough in the high range. So bucket-wise RMSE is also important.
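One simple way to run this check is to split the target into quantile buckets and compute the RMSE inside each; a sketch below, where `bucket_rmse` and the quantile-based bucketing are illustrative choices, not prescribed by the post.

```python
import numpy as np

def bucket_rmse(y_true, y_pred, n_buckets=4):
    """RMSE within quantile buckets of the target, e.g. low vs high traffic."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    edges = np.quantile(y_true, np.linspace(0, 1, n_buckets + 1))
    result = {}
    for b in range(n_buckets):
        lo, hi = edges[b], edges[b + 1]
        if b == n_buckets - 1:
            mask = (y_true >= lo) & (y_true <= hi)   # last bucket includes the max
        else:
            mask = (y_true >= lo) & (y_true < hi)
        rmse = np.sqrt(np.mean((y_true[mask] - y_pred[mask]) ** 2))
        result[f"[{lo:.1f}, {hi:.1f}]"] = float(rmse)
    return result
```

If the per-bucket RMSEs are roughly equal, the model's error is uniform across the range; a bucket with a much larger RMSE flags exactly the low-value (or high-value) failure mode described above.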

There can be many other use-cases for interpreting and checking the accuracy of multiple linear regression, but I have covered the theoretical metrics and two of the common business-related cases I have come across. Please comment, share and subscribe to my blog to read more such content. Also, to know more about linear regression, follow this linear regression blog written by me. Thanks for reading!
