---
author: Wu Zhenyu (SA21006096)
title: Homework 3 for Statistical Learning
documentclass: article
---

# 7

If the basis function is constant, i.e. $\phi(\mathbf{x}) = 1$, calculate the corresponding equivalent kernel function.


$$\kappa(\mathbf{x}_1, \mathbf{x}_2) = {\phi(\mathbf{x}_1)}^\mathsf{T}\phi(\mathbf{x}_2) = 1$$
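As a quick numerical sanity check, here is a minimal NumPy sketch (the names `phi` and `kappa` are illustrative, not from the original text) confirming that the equivalent kernel is identically 1:

```python
import numpy as np

def phi(x):
    """Constant basis function: every input maps to the single feature 1."""
    return np.array([1.0])

def kappa(x1, x2):
    """Equivalent kernel: the inner product of the two feature vectors."""
    return float(phi(x1) @ phi(x2))

# The kernel equals 1 for any pair of inputs.
assert kappa(0.5, -2.0) == 1.0
```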

# 9

Solve the regularized weighted least squares problem:

$$\min_\mathbf{w}\frac12\sum_{i = 1}^N r_i{(y_i - \mathbf{w} \cdot \mathbf{x}_i)}^2 + \frac\lambda2\lVert \mathbf{w}\rVert_2^2$$

where $r_i > 0$ is the weight of $(\mathbf{x}_i , y_i)$.


$$\begin{aligned}
\mathrm{d}\Big(\frac12\sum_{i = 1}^N r_i{(y_i - \mathbf{w} \cdot \mathbf{x}_i)}^2 + \frac\lambda2\lVert \mathbf{w}\rVert_2^2\Big) & = \mathrm{d}\mathbf{w}^\mathsf{T} \Big(\sum_{i = 1}^N r_i(\mathbf{w}^\mathsf{T}\mathbf{x}_i - y_i)\mathbf{x}_i + \lambda\mathbf{w}\Big) = 0 \\
\lambda\mathbf{w} + \sum_{i = 1}^N r_i\mathbf{x}_i\mathbf{x}_i^\mathsf{T}\mathbf{w} & = \sum_{i = 1}^N r_i y_i \mathbf{x}_i \\
\mathbf{w} & = \Big(\lambda I + \sum_{i = 1}^N r_i\mathbf{x}_i\mathbf{x}_i^\mathsf{T}\Big)^{-1} \sum_{i = 1}^N r_i y_i \mathbf{x}_i
\end{aligned}$$
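A minimal NumPy sketch of this closed-form solution (the function name `weighted_ridge` is an illustrative choice, not from the original):

```python
import numpy as np

def weighted_ridge(X, y, r, lam):
    """Closed form: w = (lambda I + sum_i r_i x_i x_i^T)^{-1} sum_i r_i y_i x_i.

    X: (N, d) design matrix, y: (N,) targets, r: (N,) positive weights.
    """
    d = X.shape[1]
    A = lam * np.eye(d) + X.T @ (r[:, None] * X)  # lambda I + sum_i r_i x_i x_i^T
    b = X.T @ (r * y)                             # sum_i r_i y_i x_i
    return np.linalg.solve(A, b)                  # solve; avoid an explicit inverse
```

With all $r_i = 1$ this reduces to ordinary ridge regression.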

# 16*

Randomly generate 100 datasets, each consisting of 25 points sampled from $y = \sin(2\pi x) + e$, where $x \in \{0.041 \times i \mid i = 0, 1, \ldots, 24\}$ and $e$ is additive white Gaussian noise distributed as $\mathscr{N}(0, 0.3^2)$. Perform ridge regression on each dataset using a 7th-order polynomial (8 free parameters) for several values of $\lambda$, and observe how the results change with $\lambda$.
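A minimal sketch of this experiment (the random seed, the particular $\lambda$ values, and the printed bias/variance summary are my own illustrative choices, not specified in the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
x = 0.041 * np.arange(25)                    # the 25 fixed design points
Phi = np.vander(x, N=8, increasing=True)     # 7th-order polynomial: 8 features

for lam in (1e-8, 1e-3, 1.0):                # a few illustrative lambda values
    preds = []
    for _ in range(100):                     # 100 random datasets
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=25)
        w = np.linalg.solve(lam * np.eye(8) + Phi.T @ Phi, Phi.T @ y)
        preds.append(Phi @ w)
    preds = np.array(preds)
    # Across datasets, the variance of the fits shrinks as lambda grows,
    # while the squared bias relative to sin(2 pi x) grows.
    bias2 = np.mean((preds.mean(axis=0) - np.sin(2 * np.pi * x)) ** 2)
    var = preds.var(axis=0).mean()
    print(f"lambda={lam:g}: bias^2={bias2:.4f}, variance={var:.4f}")
```

Plotting the 100 fitted curves for each $\lambda$ makes the same bias-variance trade-off visible directly.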