Linear regression beta obtained from Hilbert space projection
Here’s a pretty cool and useless way to understand how to obtain the $\beta$ in linear regression. Let’s begin with a (real valued) Hilbert space $\mathcal H$, which comes with an inner product $\langle \cdot, \cdot\rangle$ that is symmetric, bilinear, and positive definite. (You might as well think of the classical $x, y\in \mathbb{R}^m$ with $\langle x, y\rangle = x^Ty = \sum_{i \in [m]}x_iy_i$ from standard linear algebra.) Now suppose we are given $y\in \mathcal H$ and $X := [x_1, x_2, \ldots, x_N]\in \mathcal H^N$. The way to “project” $y$ onto $X$ is ...
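To anticipate where this is heading, here’s a minimal numerical sketch (all names and dimensions are illustrative, and I’m specializing to the Euclidean inner product $\langle x, y\rangle = x^Ty$): the projection of $y$ onto the span of $x_1, \ldots, x_N$ has coefficients $\beta$ solving the Gram system $G\beta = b$, where $G_{ij} = \langle x_i, x_j\rangle$ and $b_i = \langle x_i, y\rangle$ — exactly the normal equations of linear regression.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 3, 100                  # N regressors living in R^m
X = rng.normal(size=(m, N))    # columns are x_1, ..., x_N
y = rng.normal(size=m)

# Gram matrix G[i, j] = <x_i, x_j> and right-hand side b[i] = <x_i, y>
G = X.T @ X
b = X.T @ y

# Projection coefficients: solve G beta = b (the normal equations)
beta = np.linalg.solve(G, b)

# Sanity check: the residual y - X beta is orthogonal to every x_i
residual = y - X @ beta
print(np.allclose(X.T @ residual, 0))  # True
```

The orthogonality check at the end is the defining property of the projection: the leftover part of $y$ carries no component along any $x_i$.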