GP classification description #244

Open
benmarlin opened this issue Sep 16, 2016 · 1 comment

Comments

@benmarlin

The description of GP classification here, http://edwardlib.org/tut_gp_classification, seems to be incorrect. The description looks like Bayesian logistic regression where the prior covariance matrix on the weights has been replaced with the Gram matrix of a GP covariance function. The dimensionality of z implied by p(z) = N(z; 0, K), with K the Gram matrix, is the number of data cases N. But x_n is stated to have dimension D, so the inner product x_n^T z inside the inverse logit has a dimension mismatch. Compare to equation 3.10 in Rasmussen and Williams (2006) for their presentation of the posterior predictive distribution. In your notation, you probably want something like p(y_n | x_n, f) = Bernoulli(y_n | logit^{-1}(f(x_n))) with f the draw from the GP, or p(y_n | x_n, z_n) = Bernoulli(y_n | logit^{-1}(z_n)) with z_n = f(x_n).
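
To make the dimensions concrete, here is a minimal NumPy sketch of the corrected generative model (illustrative only, not the tutorial's Edward code): the N latent function values z = (f(x_1), ..., f(x_N)) are drawn jointly from N(0, K), where K is the N×N Gram matrix over the training inputs, and each label is y_n ~ Bernoulli(logit^{-1}(z_n)).

```python
# Minimal NumPy sketch of the GP classification generative model
# described above (not Edward's API): latent function values z drawn
# jointly from a GP prior over the training inputs, then Bernoulli labels.
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    """N x N Gram matrix of a squared-exponential kernel over the rows of X."""
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
N, D = 50, 3                        # N data cases, each x_n of dimension D
X = rng.normal(size=(N, D))

K = rbf_kernel(X) + 1e-6 * np.eye(N)           # Gram matrix (jitter for stability)
z = rng.multivariate_normal(np.zeros(N), K)    # z ~ N(0, K), so z_n = f(x_n)
y = rng.binomial(1, inv_logit(z))              # y_n ~ Bernoulli(logit^{-1}(z_n))
```

Note that z has dimension N (one latent value per data case), which is why an inner product x_n^T z with x_n of dimension D cannot be what is intended.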

@dustinvtran (Member) commented Sep 16, 2016

Good catch! Yes, you're right: the prior is described incorrectly, and the most obvious sign is the dimension mismatch in the equation.

If you have time to make edits, we would greatly appreciate it, and we can merge those changes into the website.
