Gaussian Process

Tags: #math #gaussian process

Equation

\log p(y|X) \propto -[y^{T}(K + \sigma^{2}I)^{-1}y+\log|K + \sigma^{2}I|] \\
f(X)=[f(x_{1}),f(x_{2}),...,f(x_{N})]^{T} \sim \mathcal{N}(\mu, K_{X,X}) \\
f_{*}|X_{*},X,y \sim \mathcal{N}(\mathbb{E}(f_{*}),\text{cov}(f_{*})) \\
\text{cov}(f_{*})=K_{X_{*},X_{*}}-K_{X_{*},X}[K_{X,X}+\sigma^{2}I]^{-1}K_{X,X_{*}}

Latex Code

\log p(y|X) \propto -[y^{T}(K + \sigma^{2}I)^{-1}y+\log|K + \sigma^{2}I|] \\
f(X)=[f(x_{1}),f(x_{2}),...,f(x_{N})]^{T} \sim \mathcal{N}(\mu, K_{X,X}) \\
f_{*}|X_{*},X,y \sim \mathcal{N}(\mathbb{E}(f_{*}),\text{cov}(f_{*})) \\
\text{cov}(f_{*})=K_{X_{*},X_{*}}-K_{X_{*},X}[K_{X,X}+\sigma^{2}I]^{-1}K_{X,X_{*}}

Introduction

Equation


Joint Gaussian Distribution assumption

Probabilistic framework for GP

Prediction on new unseen data

Latex Code

% Joint Gaussian Distribution assumption
f(X)=[f(x_{1}),f(x_{2}),...,f(x_{N})]^{T} \sim \mathcal{N}(\mu, K_{X,X})

% Probabilistic framework for GP
\log p(y|X) \propto -[y^{T}(K + \sigma^{2}I)^{-1}y+\log|K + \sigma^{2}I|]

% Prediction on new unseen data
f_{*}|X_{*},X,y \sim \mathcal{N}(\mathbb{E}(f_{*}),\text{cov}(f_{*})) \\
\mathbb{E}(f_{*}) = \mu_{X_{*}}+K_{X_{*},X}[K_{X,X}+\sigma^{2}I]^{-1}(y-\mu_{X}) \\
\text{cov}(f_{*})=K_{X_{*},X_{*}}-K_{X_{*},X}[K_{X,X}+\sigma^{2}I]^{-1}K_{X,X_{*}}
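Python Code

Below is a minimal NumPy sketch of the three equations above, not code from the original source: it assumes a squared-exponential (RBF) kernel with illustrative hyperparameters, a zero prior mean (so \mu = 0 and \mathbb{E}(f_{*}) reduces to K_{X_{*},X}[K_{X,X}+\sigma^{2}I]^{-1}y), and noise variance \sigma^{2} = 0.1.

import numpy as np

def rbf_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """K[i, j] = signal_var * exp(-||A[i] - B[j]||^2 / (2 * length_scale^2)). Kernel choice is an assumption."""
    sq_dists = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * sq_dists / length_scale**2)

def log_marginal_likelihood(X, y, noise_var=0.1):
    """log p(y|X) up to a constant: -[y^T (K + sigma^2 I)^{-1} y + log|K + sigma^2 I|]."""
    Ky = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(Ky)                       # Ky = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    log_det = 2.0 * np.sum(np.log(np.diag(L)))       # log|Ky| from the Cholesky factor
    return -(y @ alpha + log_det)

def gp_predict(X, y, X_star, noise_var=0.1):
    """Posterior mean E(f_*) and covariance cov(f_*) at test inputs X_star (zero prior mean assumed)."""
    K_s = rbf_kernel(X_star, X)                      # K_{X_*, X}
    K_ss = rbf_kernel(X_star, X_star)                # K_{X_*, X_*}
    Ky = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(Ky)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s @ alpha                               # E(f_*) with mu = 0
    v = np.linalg.solve(L, K_s.T)
    cov = K_ss - v.T @ v                             # K_{X_*,X_*} - K_{X_*,X} Ky^{-1} K_{X,X_*}
    return mean, cov

The Cholesky-based solves are a standard numerical choice here: they avoid forming [K_{X,X}+\sigma^{2}I]^{-1} explicitly and give \log|K+\sigma^{2}I| for free from the factor's diagonal.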

Explanation

A Gaussian process assumes that the function values at N inputs are not independent but correlated: the collection of N function values, represented by the N-dimensional vector f, has a joint Gaussian distribution with a mean vector and a covariance matrix (kernel matrix). Predictions at the n_{*} test inputs are then given by the posterior mean \mathbb{E}(f_{*}) and covariance \text{cov}(f_{*}) above. See the Deep Kernel Learning link below for more details.
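As a quick usage illustration of the sketch above (the data here are invented for the example, not from the source), one might fit noisy samples of sin(x) on [0, 5] and predict on a denser grid:

X = np.linspace(0, 5, 20).reshape(-1, 1)           # 20 training inputs
y = np.sin(X).ravel() + 0.1 * np.random.randn(20)  # noisy observations
X_star = np.linspace(0, 5, 100).reshape(-1, 1)     # test inputs
mean, cov = gp_predict(X, y, X_star)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))    # pointwise predictive std; clip tiny negative diagonals
print(log_marginal_likelihood(X, y))               # model-fit score from the second equation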
