
How to obtain the Hessian matrix after optimization? #284

Landau1908 opened this issue Dec 6, 2024 · 1 comment
@Landau1908

Hi,

Is there a way to propagate uncertainties from the parameters (a_i) to the observables (x_j), as the Hessian approach does?
In the Hessian approach, the uncertainty on an observable is determined from the uncertainties in parameter space, and this uncertainty gives the confidence level (C.L.).
In CMA, how can one obtain the Hessian matrix or the C.L.?

Regards

@nikohansen
Contributor

nikohansen commented Dec 6, 2024

The sample covariance matrix is an estimator of the inverse Hessian up to a scalar factor. If isinstance(es, cma.CMAEvolutionStrategy), es.sigma_vec.is_identity, and es.gp.isidentity all hold, then es.C is the sample covariance matrix up to a scalar factor.
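
For illustration, a minimal sketch of how one might read es.C after a run with pycma, following the conditions above. The quadratic objective, the variable names, and the budget here are placeholders and not from the thread; only cma.CMAEvolutionStrategy, es.optimize, es.C, es.sigma_vec.is_identity, and es.gp.isidentity are taken from the comment or the standard pycma API:

```python
import cma
import numpy as np

def objective(x):
    # toy quadratic; its Hessian is 2 * diag(1, 10, 100, ...)
    return sum(10**i * xi**2 for i, xi in enumerate(x))

es = cma.CMAEvolutionStrategy(5 * [1.0], 0.5)
es.optimize(objective)

# es.C approximates the inverse Hessian only up to an unknown scalar factor,
# and only when the internal coordinate transformations are the identity
# (the conditions named in the comment above).
if isinstance(es, cma.CMAEvolutionStrategy) and es.sigma_vec.is_identity and es.gp.isidentity:
    C = np.asarray(es.C)               # sample covariance matrix (up to a scalar)
    hessian_scaled = np.linalg.inv(C)  # proportional to the Hessian
```

Because the proportionality constant is unknown, es.C by itself fixes only the shape (relative scaling and correlations) of the parameter uncertainties, not absolute confidence levels.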
