
config type specific "target" error (or really relative?) #15

Open
casv2 opened this issue Jul 23, 2020 · 5 comments

Comments

casv2 (Collaborator) commented Jul 23, 2020

Gabor mentioned looking at fitting, but with config-type-specific target errors in mind. How would we go about this? By performing a series of sequential (regularised) fits and taking "steps" in certain directions (some sort of optimisation problem)?

I guess we can only really do this with relative errors in mind? Such as: liquid:diamond => 1:4

@cortner @gabor1
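One way to make relative targets concrete (a sketch only; the config-type names and the 1:4 ratio are illustrative, not a fixed API) is to turn each type's target error into a per-row weight of the least-squares system:

```julia
# Hypothetical sketch: per-config-type relative error targets turned into
# per-observation LSQ weights. Names and numbers are illustrative only.
rel_target = Dict("liquid" => 1.0, "diamond" => 4.0)   # liquid:diamond => 1:4

cfg_types = ["liquid", "liquid", "diamond"]            # one label per LSQ row

# weight ∝ 1 / target error: rows with a tighter target get a larger weight
w = [1 / rel_target[c] for c in cfg_types]
w /= sum(w)
```

A row whose target error is 4× looser ends up with a 4× smaller weight, so the fit is allowed to be correspondingly less accurate there.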

cortner (Member) commented Jul 23, 2020

Google "iteratively reweighted least squares". Not difficult but expensive. We may wish to combine this with some other ideas...

cortner (Member) commented Jul 23, 2020

Here is some code I wrote for my students a little while ago. It should make the idea clear.

using LinearAlgebra, Printf

# weighted least squares: minimise ||W^(1/2) (A x - y)||_2 via QR
function wlsq(A, y, w)
    W = Diagonal(sqrt.(w))
    return qr(W * A) \ (W * y)
end

# iteratively reweighted least squares (Lawson-style): reweight each
# observation by its residual so that the L2 solves approach an
# L∞ (max-norm) fit
function irlsq(A, y; tol = 1e-5, maxnit = 100, γ = 1.0, verbose = true)
    M, N = size(A)
    @assert M == length(y)
    w = ones(M) / M
    res = Inf
    x = zeros(N)
    verbose  && @printf("  n   | ||f-p||_inf |  extrema(w) \n")
    verbose  && @printf("------|-------------|---------------------\n")
    for nit = 1:maxnit
        x = wlsq(A, y, w)
        res = norm(y - A * x, Inf)
        verbose  && @printf(" %4d |   %.2e  |  %.2e  %.2e \n", nit, res, extrema(w)...)

        # reweight: emphasise observations with large residuals
        wnew = w .* (abs.(y - A * x).^γ .+ 1e-15)
        wnew /= sum(wnew)

        # stop once the weights have essentially converged
        if norm(wnew - w, Inf) < tol
            w = wnew
            break
        end
        w = wnew
    end
    return x, w, res
end
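As a quick sanity check of the idea (a standalone sketch; `wlsq` is restated compactly so the snippet runs on its own), the reweighting should push the fit's max-norm error below that of a plain least-squares fit:

```julia
using LinearAlgebra

# compact restatement of wlsq so this snippet is self-contained
wlsq(A, y, w) = (W = Diagonal(sqrt.(w)); qr(W * A) \ (W * y))

# fit a cubic to exp(x) on [-1, 1]; Lawson-style reweighting with γ = 1
function demo()
    xs = range(-1, 1, length = 200)
    A  = [x^j for x in xs, j in 0:3]
    y  = exp.(xs)
    x_ls = A \ y                           # plain least squares
    w = ones(length(y)) / length(y)
    x_irls = x_ls
    for _ in 1:50
        x_irls = wlsq(A, y, w)
        w = w .* (abs.(y - A * x_irls) .+ 1e-15)
        w /= sum(w)
    end
    return norm(y - A * x_ls, Inf), norm(y - A * x_irls, Inf)
end

e_ls, e_irls = demo()   # the reweighted fit should have the smaller max-norm error
```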

cortner (Member) commented Jul 23, 2020

What I'd consider is combining this algorithm with the iterative LSQ solver that Vincent is currently developing for us. We would do the reweighting using only a low-accuracy solution of the LSQ system, and then, once the weights have more or less converged, freeze them and do the high-accuracy LSQ solve via QR decomposition.

At least that's my intuition.
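A sketch of this two-phase idea (function names hypothetical; a few CGLS steps stand in for the iterative solver mentioned above, which is not available here):

```julia
using LinearAlgebra

# cheap, low-accuracy weighted LSQ solve: a few steps of CG on the
# normal equations (CGLS), standing in for an iterative LSQR solver
function cheap_solve(A, y, w; nsteps = 10)
    W = Diagonal(sqrt.(w))
    B, b = W * A, W * y
    x = zeros(size(A, 2))
    r = B' * (b - B * x)
    p = copy(r)
    for _ in 1:nsteps
        q = B' * (B * p)
        α = dot(r, r) / dot(p, q)
        x += α * p
        rnew = r - α * q
        β = dot(rnew, rnew) / dot(r, r)
        p = rnew + β * p
        r = rnew
    end
    return x
end

# phase 1: drive the IRLS weights to rough convergence with cheap solves;
# phase 2: freeze the weights and do one high-accuracy solve via QR
function irlsq_twophase(A, y; maxnit = 50, wtol = 1e-3)
    M = size(A, 1)
    w = ones(M) / M
    for _ in 1:maxnit
        x = cheap_solve(A, y, w)
        wnew = w .* (abs.(y - A * x) .+ 1e-15)
        wnew /= sum(wnew)
        if norm(wnew - w, Inf) < wtol
            w = wnew
            break
        end
        w = wnew
    end
    W = Diagonal(sqrt.(w))
    return qr(W * A) \ (W * y), w
end
```

The expensive QR factorisation is paid only once, after the cheap iterations have settled the weights.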

cortner (Member) commented Jul 23, 2020

Regarding absolute vs relative errors: yes, of course you are right. But you'd still need to give a single number for each observation / category. You would then use this to specify a weighted max-norm, which is again equivalent to just choosing weights on each observation: reweight the LSQ system but then minimise the max-norm instead of the RMSE.

So there will be two types of weights: one to specify the relative importance of each observation, and a second that the algorithm produces so that the L2-minimisation effectively gives you an L∞-minimisation.
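The two kinds of weights combine multiplicatively per observation; a tiny sketch (all numbers purely illustrative):

```julia
# user-chosen importance weights and the algorithm's IRLS weights
# combine multiplicatively per observation
w_importance = [4.0, 4.0, 1.0]        # e.g. liquid rows weighted 4× higher
residuals    = [0.10, 0.02, 0.30]     # |y - A*x| at the current iterate
w_irls       = residuals .+ 1e-15     # Lawson reweighting with γ = 1

w_total = w_importance .* w_irls
w_total /= sum(w_total)               # normalised weights for the next solve
```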

If this is all very unclear then let's have a meeting next week and discuss it.

cortner (Member) commented Jul 23, 2020

Another idea:

  • Initialise the weights and compute a QR factorisation.
  • Reweight.
  • Use the current QR factorisation to precondition LSQR.
  • Iterate, and every so often update the QR factorisation.
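A rough illustration of why the stale factorisation should work as a preconditioner (a sketch under assumptions, not an existing API): if R comes from a QR of the weighted system at the old weights, then after the weights drift slightly, the new weighted matrix right-preconditioned by R stays close to orthogonal, so an iterative solver like LSQR on it would converge in a few steps.

```julia
using LinearAlgebra

# Reuse the R factor from a QR of the *old* weighted system as a right
# preconditioner after a reweighting step. (W*A)*inv(R) stays close to
# orthogonal, so an iterative LSQ solver on it converges quickly; one
# recovers x = inv(R)*z and refreshes the factorisation every so often.
A  = randn(100, 8)
w0 = ones(100) / 100
R  = qr(Diagonal(sqrt.(w0)) * A).R        # factorisation at the old weights

w = w0 .* (1 .+ 0.1 .* rand(100))         # weights after one reweighting step
w /= sum(w)

B = (Diagonal(sqrt.(w)) * A) / R          # right-preconditioned matrix
cond(B)                                   # close to 1, however ill-conditioned A is
```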
