Gaussian Models

Author: 水豚2号 | Published 2020-03-06 14:14

    Basic Knowledge

    • Normal Distribution / Multivariate Gaussian
      \mathcal N(x|\mu, \sigma^{2}) = \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)
      \mathcal N(x|\mu, \Sigma) = \frac{1}{(2\pi)^{D/2}|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)\right)
    • Eigenvalue Decomposition
    • MLE estimates of \mu and \Sigma (sample mean and sample covariance; see the sketch after this list)
    • Among all distributions with the same \mu and \Sigma, the Gaussian has maximum entropy
    • Joint Gaussian Distribution
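    A minimal NumPy sketch of these basics, using synthetic 2-D data (the values, the seed, and the helper name log_gauss are illustrative assumptions, not from the original notes): it computes the MLE estimates of \mu and \Sigma, takes an eigenvalue decomposition of \Sigma, and evaluates the multivariate Gaussian log-density through that decomposition.

```python
import numpy as np

# Synthetic 2-D data (illustrative assumption)
rng = np.random.default_rng(0)
data = rng.multivariate_normal(mean=[1.0, -2.0],
                               cov=[[2.0, 0.8], [0.8, 1.0]],
                               size=500)

# MLE estimates: sample mean and (biased, 1/N) sample covariance
mu_hat = data.mean(axis=0)
Sigma_hat = (data - mu_hat).T @ (data - mu_hat) / len(data)

# Eigenvalue decomposition of the covariance: Sigma = U diag(lam) U^T
lam, U = np.linalg.eigh(Sigma_hat)

def log_gauss(x, mu, lam, U):
    """Multivariate Gaussian log-density, evaluated via the eigendecomposition."""
    z = U.T @ (x - mu)                     # coordinates in the eigenbasis
    maha = np.sum(z ** 2 / lam)            # squared Mahalanobis distance
    logdet = np.sum(np.log(lam))           # log |Sigma|
    return -0.5 * (maha + logdet + len(mu) * np.log(2 * np.pi))

print(mu_hat, lam)
print(log_gauss(np.array([1.0, -2.0]), mu_hat, lam, U))
```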

    Interpolation

    • x_{i} = f(t_{i}), t_{i} \in [0, T]
      Data points: D in total, of which N are observed
    • Assume the underlying function is smooth: x_{j} = \frac{1}{2} (x_{j-1} + x_{j+1}) + \epsilon_{j} with \epsilon_{j} \sim N(0, \sigma^{2}). Stacking these constraints gives LX=\epsilon, where L is the (D-2)*D second-difference matrix, so informally X=L^{-1}\epsilon
    • Hence the prior X \sim N(0, \sigma^{2}(L^{T}L)^{-1}); L^{T}L has rank D-2, so this prior is improper until we condition on observations (a construction sketch follows this list)
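    A minimal sketch of the smoothness prior, with the grid size D and scale \sigma chosen only for illustration: it builds the (D-2)*D second-difference matrix L implied by the smoothness assumption and forms the prior precision L^{T}L/\sigma^{2}, whose rank D-2 is what makes the prior improper on its own.

```python
import numpy as np

def second_difference_matrix(D):
    """(D-2) x D matrix L: each row encodes x_j - 0.5*(x_{j-1} + x_{j+1}) at an interior point."""
    L = np.zeros((D - 2, D))
    for j in range(D - 2):
        L[j, j:j + 3] = [-0.5, 1.0, -0.5]
    return L

D, sigma = 150, 1.0                      # grid size and noise scale (assumed values)
L = second_difference_matrix(D)

# Prior precision implied by L X = eps with eps ~ N(0, sigma^2 I)
Lambda = L.T @ L / sigma ** 2

# L is (D-2) x D, so Lambda has rank D-2: the prior puts no penalty on the offset
# and slope directions, and only becomes proper once we condition on observed points.
print(np.linalg.matrix_rank(Lambda), "of", D)
```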

    Noise-free Observation

    1. Partition X into [X_{1}, X_{2}] and the columns of L into [L_{1}, L_{2}], where X_{2} holds the N noise-free observations. If the observed points are not adjacent, permute the entries of X together with the columns of L
    2. The joint precision \Lambda = \sigma^{-2}L^{T}L then has blocks \Lambda_{11} = \sigma^{-2}L_{1}^{T}L_{1}, \Lambda_{12} = \sigma^{-2}L_{1}^{T}L_{2}, \Lambda_{22} = \sigma^{-2}L_{2}^{T}L_{2}
    3. Apply the Gaussian conditioning formula to get X_{1}|X_{2} \sim N(-\Lambda_{11}^{-1}\Lambda_{12}X_{2}, \Lambda_{11}^{-1}), i.e., the distribution of the (D-N) * 1 vector of unknown points
    4. Generate f(t) from the conditional mean (or conditional samples), as in the sketch below
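    A sketch of the noise-free case under assumed values (grid size, observation indices, and observed values are made up for illustration): it partitions the columns of L into unknown and observed groups, applies the conditioning formula from step 3, and reassembles f(t) from the conditional mean.

```python
import numpy as np

def second_difference_matrix(D):
    """(D-2) x D second-difference matrix L."""
    L = np.zeros((D - 2, D))
    for j in range(D - 2):
        L[j, j:j + 3] = [-0.5, 1.0, -0.5]
    return L

def interpolate_noise_free(x_obs, obs_idx, D, sigma=1.0):
    """Conditional distribution of the unknown points given noise-free observations."""
    L = second_difference_matrix(D)
    unk_idx = np.setdiff1d(np.arange(D), obs_idx)
    L1, L2 = L[:, unk_idx], L[:, obs_idx]          # partition the columns of L

    # Joint precision (up to 1/sigma^2) has blocks L1^T L1, L1^T L2, L2^T L2.
    # Conditioning a zero-mean Gaussian on X2 = x_obs gives
    #   Sigma_1|2 = sigma^2 (L1^T L1)^{-1},  mu_1|2 = -(L1^T L1)^{-1} L1^T L2 x_obs
    A11 = L1.T @ L1
    mu_1 = -np.linalg.solve(A11, L1.T @ (L2 @ x_obs))
    Sigma_1 = sigma ** 2 * np.linalg.inv(A11)

    # Reassemble f(t): conditional mean at unknown points, data at observed points
    f = np.empty(D)
    f[obs_idx], f[unk_idx] = x_obs, mu_1
    return f, mu_1, Sigma_1

# Illustrative usage (all values assumed)
D = 100
obs_idx = np.array([0, 20, 45, 70, 99])
x_obs = np.array([0.0, 1.5, -0.5, 2.0, 0.3])
f, mu_1, Sigma_1 = interpolate_noise_free(x_obs, obs_idx, D)
print(f[:5])
```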

    Noisy Observation

    1. Observe N noisy data points y_{i} = x_{i} + \eta_{i}, with \eta_{i} \sim N(0, \sigma_{y}^{2})
    2. Use a selection matrix A that picks out the observed entries, so that Y_{N*1} = A_{N*D} X_{D*1} + \eta
    3. Now X \sim N(0, \sigma^{2}(L^{T}L)^{-1}) and Y|X \sim N(AX, \sigma_{y}^{2}I)
    4. Apply Bayes' rule for linear Gaussian systems to get the posterior X|Y, i.e., the distribution of the full D*1 vector (see the sketch below)
    5. Generate f(t) from the posterior mean (or posterior samples)
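    A sketch of the noisy case, again with illustrative values for D, the observation indices, y, \sigma, and \sigma_{y}: it builds the N*D selection matrix A and applies Bayes' rule for linear Gaussian systems to obtain the posterior mean and covariance over all D points.

```python
import numpy as np

def interpolate_noisy(y_obs, obs_idx, D, sigma=1.0, sigma_y=0.1):
    """Posterior over all D points given noisy observations y = A x + eta."""
    # Smoothness prior: X ~ N(0, sigma^2 (L^T L)^{-1}), L the second-difference matrix
    L = np.zeros((D - 2, D))
    for j in range(D - 2):
        L[j, j:j + 3] = [-0.5, 1.0, -0.5]

    # Selection matrix A (N x D): row i picks out entry obs_idx[i] of x
    A = np.zeros((len(obs_idx), D))
    A[np.arange(len(obs_idx)), obs_idx] = 1.0

    # Bayes' rule for linear Gaussian systems (zero prior mean):
    #   Lambda_post = L^T L / sigma^2 + A^T A / sigma_y^2
    #   mu_post     = Lambda_post^{-1} A^T y / sigma_y^2
    Lambda_post = L.T @ L / sigma ** 2 + A.T @ A / sigma_y ** 2
    Sigma_post = np.linalg.inv(Lambda_post)
    mu_post = Sigma_post @ (A.T @ y_obs) / sigma_y ** 2
    return mu_post, Sigma_post

# Illustrative usage (all values assumed)
D = 100
obs_idx = np.array([0, 20, 45, 70, 99])
y_obs = np.array([0.1, 1.4, -0.6, 2.1, 0.2])
mu_post, Sigma_post = interpolate_noisy(y_obs, obs_idx, D)
print(mu_post[obs_idx])
```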
