
Closed form solution for linear regression

Posted by Krishna Sankar on December 4, 2011 in DSP

In the previous posts on Batch Gradient Descent [1] and Stochastic Gradient Descent [2], we looked at two iterative methods for finding the parameter vector $\theta$ which minimizes the square of the error between the predicted value $h_{\theta}(x)$ and the actual output $y$ for all $j$ values in the training set.

A closed form solution for finding the parameter vector $\theta$ is possible, and in this post let us explore that. Of course, I thank Prof. Andrew Ng for making all this material available in the public domain (Lecture Notes 1 [3]).

## Notations

Let’s revisit the notations.

Let $m$ be the number of training examples (in our case, the top 50 articles),

$x$ be the input sequence (the page index),

$y$ be the output sequence (the page views for each page index)

$n$ be the number of features/parameters (=2 for our example).

The pair $(x^j,y^j)$ corresponds to the $j^{th}$ training example.

We predict the number of page views for a given page index using a hypothesis $h_{\theta}(x)$ defined as:

$\begin{array}{lll}h_{\theta}(x)&=&\theta_0 x_0 + \theta_1 x_1\\&=&\sum_{i=0}^{n-1}\theta_i x_i\\&=&\theta^Tx\end{array}$

where,

$x_1$ is the page index,

$x_0 = 1$.
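As a quick illustration, here is a minimal Octave sketch of the hypothesis for a single input (the $\theta$ values below are placeholders, not the fitted ones):

```
theta = [1800; -40];   % placeholder parameter vector [theta_0; theta_1]
x1 = 10;               % a page index
x = [1; x1];           % prepend the constant feature x_0 = 1
h = theta.' * x        % hypothesis h_theta(x) = theta^T x
```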

Formulating in matrix notation:

The input sequence is,

$X = \begin{bmatrix}x_0^1 & x_1^1 \\x_0^2 & x_1^2\\ \vdots & \vdots \\x_0^m & x_1^m\end{bmatrix}$ of dimension $[m \times n]$.

The measured values are,

$Y = \begin{bmatrix}y^1\\y^2\\\vdots \\y^m\end{bmatrix}$ of dimension $[m \times 1]$.

The parameter vector is,

$\theta=\begin{bmatrix}\theta_0\\\theta_1\end{bmatrix}$ of dimension $[n \times 1]$.

The hypothesis term is,

$H_\theta(X)=X\theta= \begin{bmatrix}x_0^1 & x_1^1 \\x_0^2 & x_1^2\\ \vdots & \vdots \\x_0^m & x_1^m\end{bmatrix}\begin{bmatrix}\theta_0\\\theta_1\end{bmatrix}$ of dimension $[m \times 1]$.

From the above,

$H_\theta(X)-Y=X\theta-Y= \begin{bmatrix}h_\theta(x^1)-y^1\\h_\theta(x^2)-y^2\\\vdots\\h_\theta(x^m)-y^m\end{bmatrix}$.
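A minimal Octave sketch with made-up numbers, just to make the dimensions concrete:

```
x1 = [1; 2; 3];        % page indices, dimension [m x 1]
X  = [ones(3,1) x1];   % design matrix with the x_0 = 1 column, [m x n]
Y  = [10; 8; 7];       % measured outputs, [m x 1]
theta = [11; -1.5];    % candidate parameter vector, [n x 1]
r = X*theta - Y        % residual H_theta(X) - Y, [m x 1]
```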

Recall :

Our goal is to find the parameter vector $\theta$ which minimizes the square of the error between the predicted value $h_{\theta}(x)$ and the actual output $y$ for all $j$ values in the training set i.e.

$\min_{\theta} \sum_{j=1}^m\left(h_{\theta}(x^j) - y^j\right)^2$.

From matrix algebra, we know that

$\sum_{j=1}^m\left(h_{\theta}(x^j) - y^j\right)^2=\left(X\theta-Y\right)^T\left(X\theta-Y\right)$.

So we can now define the cost function $J(\theta)$ as,

$J(\theta)=\frac{1}{2}\sum_{j=1}^m\left(h_{\theta}(x^j) - y^j\right)^2 = \frac{1}{2}\left(X\theta-Y\right)^T\left(X\theta-Y\right)$.
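A minimal Octave sketch (with made-up numbers, not the page-view data) confirming that the element-wise sum and the matrix form of $J(\theta)$ agree:

```
J = @(theta, X, Y) 0.5*(X*theta - Y)'*(X*theta - Y);  % matrix form of the cost

X = [ones(3,1) (1:3).'];              % small made-up design matrix
Y = [10; 8; 7];                       % made-up outputs
theta = [11; -1.5];                   % arbitrary candidate parameters
J_matrix = J(theta, X, Y)             % matrix form
J_sum    = 0.5*sum((X*theta - Y).^2)  % element-wise form, same value
```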

To find the value of $\theta$ which minimizes $J(\theta)$, we can differentiate $J(\theta)$ with respect to $\theta$.

$\begin{array}{lll}\frac{\partial}{\partial\theta}J(\theta)& =& \frac{1}{2}\frac{\partial}{\partial\theta}\left(X\theta-Y\right)^T\left(X\theta-Y\right)\\&=&\frac{1}{2}\frac{\partial}{\partial\theta}\left(\theta^TX^TX\theta - \theta^TX^TY -Y^TX\theta+Y^TY\right)\\&=&X^TX\theta - X^TY\end{array}$

To find the value of $\theta$ which minimizes $J(\theta)$, we set

$\frac{\partial}{\partial\theta}J(\theta)=0$,

$X^TX\theta - X^TY=0$.

Solving,

$\theta = \left(X^TX\right)^{-1}X^TY$
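A practical aside, not part of the derivation: explicitly inverting $X^TX$ can be numerically fragile when the matrix is ill-conditioned. In Matlab/Octave the same $\theta$ is typically computed with the backslash operator; a minimal sketch with made-up numbers:

```
X = [ones(3,1) (1:3).'];   % small made-up design matrix
Y = [10; 8; 7];            % made-up outputs
theta = (X'*X)\(X'*Y);     % solve the normal equations X'X*theta = X'Y
theta_ls = X\Y;            % backslash also solves the least squares problem directly
```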

Note : (Update 7th Dec 2011)

As pointed out by Mr. Andre KrishKevich, the above solution is the same as the formula for a linear least squares fit (linear least squares [4], least squares in wiki [5]).

## Matlab/Octave code snippet

```
clear;
close all;

x = [1:50].';   % page indices (input sequence)
y = [4554 3014 2171 1891 1593 1532 1416 1326 1297 1266 ...
     1248 1052  951  936  918  797  743  665  662  652 ...
      629  609  596  590  582  547  486  471  462  435 ...
      424  403  400  386  386  384  384  383  370  365 ...
      360  358  354  347  320  319  318  311  307  290 ].';  % page views

m = length(y);                % store the number of training examples
x = [ones(m,1) x];            % add a column of ones to x (the x_0 = 1 feature)
n = size(x,2);                % number of features
theta_vec = inv(x'*x)*x'*y;   % closed form solution: theta = (X'X)^{-1} X'Y
```

The computed $\theta$ values are

$\begin{array}{lll}\theta_0&=&1840.618\\\theta_1&=&-39.820\end{array}$.
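As a quick cross-check, reusing x and y from the snippet above: Octave/Matlab's built-in polyfit with degree 1 fits the same straight line, returning the highest-order coefficient first.

```
p = polyfit(x(:,2), y, 1)   % x(:,2) is the raw page-index column
% expect p(1) (slope) close to theta_1 = -39.820
% and p(2) (intercept) close to theta_0 = 1840.618
```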

Note :

a)

$\begin{array}{lll}\frac{\partial}{\partial\theta}\theta^TX^TX\theta & = & X^TX\theta+\left(X^TX\right)^T\theta\\ & =&2X^TX\theta\end{array}$ (since $X^TX$ is symmetric).

(Refer: Matrix calculus notes [6] - University of Colorado)

b)

$\begin{array}{lll}-\frac{\partial}{\partial\theta}\left(\theta^TX^TY\right) &=&-\frac{\partial}{\partial\theta}\mathrm{tr}\left(\theta^TX^TY\right) \\ & = & -\frac{\partial}{\partial\theta}\mathrm{tr}\left(Y^TX\theta\right)\\&=&-X^TY\end{array}$, where the trace can be introduced because $\theta^TX^TY$ is a scalar, and $\mathrm{tr}(A)=\mathrm{tr}(A^T)$.
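These identities are easy to sanity-check numerically. The sketch below compares the analytic gradient from note (a) against a central finite difference, using an arbitrary random $X$ and $\theta$:

```
X = randn(5,2); theta = randn(2,1);   % arbitrary sizes for the check
f = @(t) t'*(X'*X)*t;                 % the scalar theta^T X^T X theta
g_analytic = 2*(X'*X)*theta;          % gradient per identity (a)
d = 1e-6;                             % finite-difference step
g_numeric = zeros(2,1);
for i = 1:2
  e = zeros(2,1); e(i) = d;
  g_numeric(i) = (f(theta+e) - f(theta-e)) / (2*d);  % central difference
end
disp([g_analytic g_numeric])          % columns should agree to ~1e-6
```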


URL to article: http://www.dsplog.com/2011/12/04/closed-form-solution-linear-regression/

URLs in this post:

[3] Lecture Notes 1: http://cs229.stanford.edu/notes/cs229-notes1.pdf

[4] linear least squares: http://www.dsplog.com/2007/07/15/straight-line-fit-using-least-squares-estimate/

[5] least squares in wiki: http://en.wikipedia.org/wiki/Least_squares

[6] Matrix calculus notes: http://www.colorado.edu/engineering/cas/courses.d/IFEM.d/IFEM.AppD.d/IFEM.AppD.pdf

[7] Matrix Calculus Wiki: http://en.wikipedia.org/wiki/Matrix_calculus

[8] An Application of Supervised Learning – Autonomous Driving (Video Lecture, Class 2): http://academicearth.org/lectures/supervised-learning-autonomous-deriving

[9] CS 229 Machine Learning Course Materials: http://cs229.stanford.edu/materials.html