# Simple Linear Regression Model

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```

The simple regression model estimates the relationship between two variables $x_i$ and $y_i$

$$
y_i = \alpha + \beta x_i + \epsilon_i, \quad i = 1,2,\dots,N
$$

where $\epsilon_i$ represents the error between the line of best fit and the sample values for $y_i$ given $x_i$.

Our goal is to choose values for $\alpha$ and $\beta$ to build a line of “best” fit for some data that is available for variables $x_i$ and $y_i$.
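Before fitting anything, it can help to view the model as a data-generating process. A minimal sketch (the parameter values and noise scale below are illustrative, not taken from the data that follows):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10
alpha, beta = 100.0, 50.0                # hypothetical "true" parameters
x = rng.uniform(0, 35, size=N)           # e.g. temperatures in Celsius
epsilon = rng.normal(0, 100, size=N)     # random error around the line
y = alpha + beta * x + epsilon           # the simple linear model
```

Fitting a line to such data amounts to recovering `alpha` and `beta` from the noisy observations.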

Let us consider a simple dataset of 10 observations for variables $x_i$ and $y_i$:

| | $y_i$ | $x_i$ |
|---|---|---|
| 1 | 2000 | 32 |
| 2 | 1000 | 21 |
| 3 | 1500 | 24 |
| 4 | 2500 | 35 |
| 5 | 500 | 10 |
| 6 | 900 | 11 |
| 7 | 1100 | 22 |
| 8 | 1500 | 21 |
| 9 | 1800 | 27 |
| 10 | 250 | 2 |

Let us think about $y_i$ as sales for an ice-cream cart, while $x_i$ is a variable that records the day’s temperature in Celsius.

```python
x = [32, 21, 24, 35, 10, 11, 22, 21, 27, 2]
y = [2000, 1000, 1500, 2500, 500, 900, 1100, 1500, 1800, 250]
df = pd.DataFrame([x, y]).T
df.columns = ['X', 'Y']
df
```

We can use a scatter plot of the data to see the relationship between $y_i$ (ice-cream sales in dollars) and $x_i$ (degrees Celsius).


```python
ax = df.plot(
    x='X',
    y='Y',
    kind='scatter',
    ylabel='Ice-cream sales ($\'s)',
    xlabel='Degrees Celsius'
)
```

Figure 1: Scatter plot

As you can see, the data suggest that more ice-cream is typically sold on hotter days.

To build a linear model of the data we need to choose values for $\alpha$ and $\beta$ that represent a line of “best” fit such that

$$
\hat{y}_i = \hat{\alpha} + \hat{\beta} x_i
$$

Let’s start with $\alpha = 5$ and $\beta = 10$

```python
α = 5
β = 10
df['Y_hat'] = α + β * df['X']
```

```python
fig, ax = plt.subplots()
ax = df.plot(x='X', y='Y', kind='scatter', ax=ax)
ax = df.plot(x='X', y='Y_hat', kind='line', ax=ax)
plt.show()
```

Figure 2: Scatter plot with a line of fit

We can see that this model does a poor job of estimating the relationship.

We can continue to guess and iterate towards a line of “best” fit by adjusting the parameters.

```python
β = 100
df['Y_hat'] = α + β * df['X']
```

```python
fig, ax = plt.subplots()
ax = df.plot(x='X', y='Y', kind='scatter', ax=ax)
ax = df.plot(x='X', y='Y_hat', kind='line', ax=ax)
plt.show()
```

Figure 3: Scatter plot with a line of fit #2

```python
β = 65
df['Y_hat'] = α + β * df['X']
```

```python
fig, ax = plt.subplots()
ax = df.plot(x='X', y='Y', kind='scatter', ax=ax)
ax = df.plot(x='X', y='Y_hat', kind='line', ax=ax, color='g')
plt.show()
```

Figure 4: Scatter plot with a line of fit #3

However, we need to formalize this guessing process by thinking of the problem as an optimization problem.

Let’s consider the error $\epsilon_i$ and define the difference between the observed values $y_i$ and the estimated values $\hat{y}_i$, which we will call the residuals

$$
\begin{aligned}
\hat{e}_i &= y_i - \hat{y}_i \\
          &= y_i - \hat{\alpha} - \hat{\beta} x_i
\end{aligned}
$$
```python
df['error'] = df['Y_hat'] - df['Y']
df
```

```python
fig, ax = plt.subplots()
ax = df.plot(x='X', y='Y', kind='scatter', ax=ax)
ax = df.plot(x='X', y='Y_hat', kind='line', ax=ax, color='g')
plt.vlines(df['X'], df['Y_hat'], df['Y'], color='r')
plt.show()
```

Figure 5: Plot of the residuals

The Ordinary Least Squares (OLS) method chooses $\alpha$ and $\beta$ in such a way that minimizes the sum of squared residuals (SSR).

$$
\min_{\alpha,\beta} \sum_{i=1}^{N} \hat{e}_i^2 = \min_{\alpha,\beta} \sum_{i=1}^{N} (y_i - \alpha - \beta x_i)^2
$$

Let’s call this a cost function

$$
C = \sum_{i=1}^{N} (y_i - \alpha - \beta x_i)^2
$$

that we would like to minimize with respect to the parameters $\alpha$ and $\beta$.
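The cost function is easy to evaluate directly, which lets us compare parameter guesses numerically. A sketch using the ice-cream data from above (this helper is an addition, not part of the lecture's code):

```python
import numpy as np

x = np.array([32, 21, 24, 35, 10, 11, 22, 21, 27, 2])
y = np.array([2000, 1000, 1500, 2500, 500, 900, 1100, 1500, 1800, 250])

def cost(alpha, beta):
    """Sum of squared residuals for the given parameter guesses."""
    return np.sum((y - alpha - beta * x) ** 2)

# The later guess β = 65 costs far less than the first guess β = 10
print(cost(5, 10), cost(5, 65))
```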

## How does error change with respect to $\alpha$ and $\beta$

Let us first look at how the total error changes with respect to $\beta$ (holding the intercept $\alpha$ constant)

We know from the next section that the optimal values for $\alpha$ and $\beta$ are:

```python
β_optimal = 64.38
α_optimal = -14.72
```

We can then calculate the error for a range of $\beta$ values

```python
errors = {}
for β in np.arange(20, 100, 0.5):
    errors[β] = abs((α_optimal + β * df['X']) - df['Y']).sum()
```

Plotting the error


```python
ax = pd.Series(errors).plot(xlabel='β', ylabel='error')
plt.axvline(β_optimal, color='r');
```

Figure 6: Plotting the error

Now let us vary $\alpha$ (holding $\beta$ constant)

```python
errors = {}
for α in np.arange(-500, 500, 5):
    errors[α] = abs((α + β_optimal * df['X']) - df['Y']).sum()
```

Plotting the error


```python
ax = pd.Series(errors).plot(xlabel='α', ylabel='error')
plt.axvline(α_optimal, color='r');
```

Figure 7: Plotting the error (2)
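The two one-dimensional scans above can be combined into a coarse two-dimensional grid search over $\alpha$ and $\beta$ together. This brute-force sketch (an addition, not part of the lecture) should land near the optimal values quoted above:

```python
import numpy as np

x = np.array([32, 21, 24, 35, 10, 11, 22, 21, 27, 2])
y = np.array([2000, 1000, 1500, 2500, 500, 900, 1100, 1500, 1800, 250])

# Evaluate the sum of squared residuals on a grid of (α, β) pairs
alphas = np.arange(-100, 100, 1.0)
betas = np.arange(20, 100, 0.25)
A, B = np.meshgrid(alphas, betas)
ssr = ((y[:, None, None] - A - B * x[:, None, None]) ** 2).sum(axis=0)

# The grid point with the smallest SSR approximates the OLS solution
i, j = np.unravel_index(ssr.argmin(), ssr.shape)
print(B[i, j], A[i, j])
```

Grid search scales poorly as parameters are added, which is why the next section solves the problem exactly with calculus.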

## Calculating optimal values

Now let us use calculus to solve the optimization problem and compute the optimal values for $\alpha$ and $\beta$ to find the ordinary least squares solution.

First, taking the partial derivative with respect to $\alpha$

$$
\frac{\partial C}{\partial \alpha} = \frac{\partial}{\partial \alpha} \left[ \sum_{i=1}^{N} (y_i - \alpha - \beta x_i)^2 \right]
$$

and setting it equal to 0

$$
0 = \sum_{i=1}^{N} -2(y_i - \alpha - \beta x_i)
$$

we can remove the constant $-2$ from the summation by dividing both sides by $-2$

$$
0 = \sum_{i=1}^{N} (y_i - \alpha - \beta x_i)
$$

Now we can split this equation up into the components

$$
0 = \sum_{i=1}^{N} y_i - \sum_{i=1}^{N} \alpha - \beta \sum_{i=1}^{N} x_i
$$

The middle term is a straightforward sum of the constant $\alpha$ over $i=1,\dots,N$

$$
0 = \sum_{i=1}^{N} y_i - N \alpha - \beta \sum_{i=1}^{N} x_i
$$

and rearranging terms

$$
\alpha = \frac{\sum_{i=1}^{N} y_i - \beta \sum_{i=1}^{N} x_i}{N}
$$

We observe that both fractions resolve to the means $\bar{y_i}$ and $\bar{x_i}$

$$
\alpha = \bar{y_i} - \beta \bar{x_i}
$$

Now let’s take the partial derivative of the cost function $C$ with respect to $\beta$

$$
\frac{\partial C}{\partial \beta} = \frac{\partial}{\partial \beta} \left[ \sum_{i=1}^{N} (y_i - \alpha - \beta x_i)^2 \right]
$$

and setting it equal to 0

$$
0 = \sum_{i=1}^{N} -2 x_i (y_i - \alpha - \beta x_i)
$$

we can again take the constant outside of the summation and divide both sides by $-2$

$$
0 = \sum_{i=1}^{N} x_i (y_i - \alpha - \beta x_i)
$$

which becomes

$$
0 = \sum_{i=1}^{N} (x_i y_i - \alpha x_i - \beta x_i^2)
$$

now substituting for $\alpha$

$$
0 = \sum_{i=1}^{N} (x_i y_i - (\bar{y_i} - \beta \bar{x_i}) x_i - \beta x_i^2)
$$

and rearranging terms

$$
0 = \sum_{i=1}^{N} (x_i y_i - \bar{y_i} x_i + \beta \bar{x_i} x_i - \beta x_i^2)
$$

This can be split into two summations

$$
0 = \sum_{i=1}^{N} (x_i y_i - \bar{y_i} x_i) + \beta \sum_{i=1}^{N} (\bar{x_i} x_i - x_i^2)
$$

and solving for $\beta$ yields

$$
\beta = \frac{\sum_{i=1}^{N} (x_i y_i - \bar{y_i} x_i)}{\sum_{i=1}^{N} (x_i^2 - \bar{x_i} x_i)}
$$

We can now use the expressions for $\alpha$ and $\beta$ derived above to calculate their optimal values

Calculating $\beta$

```python
df = df[['X', 'Y']].copy()  # original data

# Calculate the sample means
x_bar = df['X'].mean()
y_bar = df['Y'].mean()
```

Now computing across the 10 observations and then summing the numerator and denominator:

```python
# Compute the sums
df['num'] = df['X'] * df['Y'] - y_bar * df['X']
df['den'] = pow(df['X'], 2) - x_bar * df['X']
β = df['num'].sum() / df['den'].sum()
print(β)
```

Calculating $\alpha$

```python
α = y_bar - β * x_bar
print(α)
```
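As a cross-check (this step is not part of the lecture), NumPy's `polyfit` with degree 1 minimizes the same sum of squared residuals and should reproduce these values:

```python
import numpy as np

x = np.array([32, 21, 24, 35, 10, 11, 22, 21, 27, 2], dtype=float)
y = np.array([2000, 1000, 1500, 2500, 500, 900, 1100, 1500, 1800, 250], dtype=float)

# polyfit returns the highest-degree coefficient first: (slope, intercept)
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)
```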

Now we can plot the OLS solution


```python
df['Y_hat'] = α + β * df['X']
df['error'] = df['Y_hat'] - df['Y']

fig, ax = plt.subplots()
ax = df.plot(x='X', y='Y', kind='scatter', ax=ax)
ax = df.plot(x='X', y='Y_hat', kind='line', ax=ax, color='g')
plt.vlines(df['X'], df['Y_hat'], df['Y'], color='r');
```

Figure 8: OLS line of best fit

Q2: Gather some data from Our World in Data

You can download a copy of the data here if you get stuck

Q3: Use pandas to import the csv formatted data and plot a few different countries of interest

```python
data_url = "https://raw.githubusercontent.com/QuantEcon/lecture-python-intro/main/lectures/_static/lecture_specific/simple_linear_regression/life-expectancy-vs-gdp-per-capita.csv"
df = pd.read_csv(data_url, nrows=10)
df
```

You can see that the data downloaded from Our World in Data covers a global set of countries, with GDP per capita and life expectancy data.

It is often a good idea to first import a few lines of data from a csv to understand its structure, so that you can then choose the columns you want to read into your DataFrame.

You can observe that there are a number of columns we won’t need to import, such as `Continent`.

So let’s build a list of the columns we want to import

```python
cols = ['Code', 'Year', 'Life expectancy at birth (historical)', 'GDP per capita']
df = pd.read_csv(data_url, usecols=cols)
df
```

Sometimes it can be useful to rename your columns to make them easier to work with in the DataFrame

```python
df.columns = ["cntry", "year", "life_expectancy", "gdppc"]
df
```

We can see there are `NaN` values, which represent missing data, so let us go ahead and drop those

```python
df.dropna(inplace=True)
df
```

We have now reduced the number of rows in our DataFrame from 62156 to 12445, removing a large number of rows with missing data.
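Before dropping rows wholesale, it can be worth checking which columns carry the missing values. A sketch on a small made-up frame (the values below are invented for illustration):

```python
import pandas as pd
import numpy as np

# Hypothetical miniature of the dataset, with NaN marking missing values
mini = pd.DataFrame({
    "cntry": ["AUS", "AUS", "GBR", "GBR"],
    "life_expectancy": [81.2, np.nan, 80.9, 81.1],
    "gdppc": [49000.0, 50000.0, np.nan, 42000.0],
})
print(mini.isna().sum())    # missing values per column
clean = mini.dropna()       # keeps only fully observed rows
print(len(mini), "->", len(clean))
```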

Now we have a dataset containing life expectancy and GDP per capita for a range of years.

It is always a good idea to spend a bit of time understanding what data you actually have.

For example, you may want to explore this data to see if there is consistent reporting for all countries across years

Let’s first look at the Life Expectancy Data

```python
le_years = df[['cntry', 'year', 'life_expectancy']].set_index(['cntry', 'year']).unstack()['life_expectancy']
le_years
```

As you can see, there are many countries where data is not available for the year 1543!

Which country does report this data?

```python
le_years[~le_years[1543].isna()]
```

You can see that Great Britain (GBR) is the only one available

You can also take a closer look at the time series to find that it is also non-continuous, even for GBR.

```python
le_years.loc['GBR'].plot()
```

In fact, we can use pandas to quickly check how many countries are captured in each year

```python
le_years.stack().unstack(level=0).count(axis=1).plot(xlabel="Year", ylabel="Number of countries");
```

So it is clear that if you are doing cross-sectional comparisons, then more recent data will include a wider set of countries.

Now let us consider the most recent year in the dataset, 2018

```python
df = df[df.year == 2018].reset_index(drop=True).copy()
df.plot(x='gdppc', y='life_expectancy', kind='scatter', xlabel="GDP per capita", ylabel="Life expectancy (years)");
```

This data shows a couple of interesting relationships.

1. There are a number of countries with similar GDP per capita levels but a wide range in life expectancy.

2. There appears to be a positive relationship between GDP per capita and life expectancy. Countries with higher GDP per capita tend to have higher life expectancy outcomes.

Even though OLS solves linear equations, one option we have is to transform the variables, such as through a log transform, and then use OLS to estimate the transformed variables.
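To see why a log transform can help, consider a small synthetic example (the data here are made up): a variable that is exactly linear in $\log_{10} x$ correlates only imperfectly with $x$ itself, but perfectly with its log.

```python
import numpy as np

x = np.linspace(1, 100, 50)
y = 2 * np.log10(x) + 1                       # linear in log10(x) by construction

corr_raw = np.corrcoef(x, y)[0, 1]            # correlation against raw x
corr_log = np.corrcoef(np.log10(x), y)[0, 1]  # correlation against log10(x)
print(corr_raw, corr_log)
```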

By specifying `logx=True` you can plot the GDP per capita data on a log scale

```python
df.plot(x='gdppc', y='life_expectancy', kind='scatter', xlabel="GDP per capita", ylabel="Life expectancy (years)", logx=True);
```

As you can see from this transformation, a linear model fits the shape of the data more closely.

```python
df['log_gdppc'] = df['gdppc'].apply(np.log10)
df
```

Q4: Use the expressions for $\alpha$ and $\beta$ derived above to compute their optimal values

```python
data = df[['log_gdppc', 'life_expectancy']].copy()  # get data from DataFrame

# Calculate the sample means
x_bar = data['log_gdppc'].mean()
y_bar = data['life_expectancy'].mean()
data
```

```python
# Compute the sums
data['num'] = data['log_gdppc'] * data['life_expectancy'] - y_bar * data['log_gdppc']
data['den'] = pow(data['log_gdppc'], 2) - x_bar * data['log_gdppc']
β = data['num'].sum() / data['den'].sum()
print(β)
```

```python
α = y_bar - β * x_bar
print(α)
```

Q5: Plot the line of best fit found using OLS

```python
data['life_expectancy_hat'] = α + β * data['log_gdppc']
data['error'] = data['life_expectancy_hat'] - data['life_expectancy']

fig, ax = plt.subplots()
data.plot(x='log_gdppc', y='life_expectancy', kind='scatter', ax=ax)
data.plot(x='log_gdppc', y='life_expectancy_hat', kind='line', ax=ax, color='g')
plt.vlines(data['log_gdppc'], data['life_expectancy_hat'], data['life_expectancy'], color='r')
```