config type specific "target" error (or really relative?) #15
Google "iteratively reweighted least squares". Not difficult, but expensive. We may wish to combine this with some other ideas...
Here is a code I wrote for my students a little while ago. It should make the idea clear.

```julia
using LinearAlgebra, Printf

# weighted least squares: minimise ‖W^{1/2}(A x - y)‖₂ via QR
function wlsq(A, y, w)
    W = Diagonal(sqrt.(w))
    return qr(W * A) \ (W * y)
end

# iteratively reweighted least squares: a sequence of weighted L2 fits
# whose weights grow on the large-residual observations, so the limit
# approximates the max-norm (L∞) fit
# (γmin is reserved for reducing γ on stagnation; unused in this listing)
function irlsq(A, y; tol = 1e-5, maxnit = 100, γ = 1.0, γmin = 1e-6, verbose = true)
    M, N = size(A)
    @assert M == length(y)
    wold = w = ones(M) / M
    res = 1e300
    x = zeros(N)
    verbose && @printf("   n  | ||f-p||_inf |  extrema(w) \n")
    verbose && @printf("------|-------------|---------------------\n")
    for nit = 1:maxnit
        x = wlsq(A, y, w)
        resnew = norm(y - A * x, Inf)
        verbose && @printf(" %4d |  %.2e   | %.2e  %.2e \n", nit, resnew, extrema(w)...)
        # update the weights, emphasising the large-residual observations
        wold = w
        res = resnew
        wnew = w .* (abs.(y - A * x).^γ .+ 1e-15)
        wnew /= sum(wnew)
        w = wnew
        # stop once the weights have (roughly) converged
        norm(w - wold, Inf) < tol && break
    end
    return x, w, res
end
```
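As a quick sanity check of the idea, the routine can be exercised on a small made-up test problem (a cubic fit to exp on [-1,1]; the data and the compact, non-printing restatement of the routines are mine, added only so this snippet runs standalone):

```julia
using LinearAlgebra

# compact restatement of wlsq/irlsq from above, without printing
wlsq(A, y, w) = (W = Diagonal(sqrt.(w)); qr(W * A) \ (W * y))

function irlsq(A, y; maxnit = 100, γ = 1.0)
    w = ones(length(y)) / length(y)
    x = zeros(size(A, 2))
    for _ in 1:maxnit
        x = wlsq(A, y, w)
        w = w .* (abs.(y - A * x).^γ .+ 1e-15)
        w /= sum(w)
    end
    return x, w, norm(y - A * x, Inf)
end

# illustrative data (not from this thread): cubic fit to exp.(xs)
xs = range(-1, 1; length = 50)
A  = [x^j for x in xs, j in 0:3]   # 50×4 Vandermonde-type design matrix
y  = exp.(xs)

x_irls, w, res_irls = irlsq(A, y)
res_lsq = norm(y - A * (qr(A) \ y), Inf)
# the reweighting should equilibrate the residuals, so the IRLS fit
# ends up with a smaller max-error than the plain L2 fit
```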
What I'd consider is combining this algorithm with the iterative LSQ solver that Vincent is currently developing for us. We would do the reweighting using only a low-accuracy solution of the LSQ system, and then, once the weights have more or less converged, freeze them and do the high-accuracy LSQ solve via QR decomposition. At least that's my intuition.
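A rough sketch of that two-stage idea, under stated assumptions: Vincent's solver isn't available here, so `cheap_solve` (a few steepest-descent steps on the normal equations) stands in for the low-accuracy iterative solve, and all names and tolerances are illustrative, not from the thread:

```julia
using LinearAlgebra

# stand-in for an iterative low-accuracy LSQ solver: a few
# exact-line-search steepest-descent steps on min ½‖W^{1/2}(Ax - y)‖²
function cheap_solve(A, y, w; nsteps = 20)
    W = Diagonal(sqrt.(w))
    B, z = W * A, W * y
    x = zeros(size(A, 2))
    for _ in 1:nsteps
        g  = B' * (B * x - z)                    # gradient
        Bg = B * g
        α  = dot(g, g) / max(dot(Bg, Bg), eps()) # exact line search
        x -= α * g
    end
    return x
end

function two_stage_irlsq(A, y; tol = 1e-3, maxnit = 50, γ = 1.0)
    M = size(A, 1)
    w = ones(M) / M
    for _ in 1:maxnit
        x = cheap_solve(A, y, w)                 # low-accuracy inner solve
        wnew = w .* (abs.(y - A * x).^γ .+ 1e-15)
        wnew /= sum(wnew)
        norm(wnew - w, Inf) < tol && (w = wnew; break)
        w = wnew
    end
    # weights frozen: one high-accuracy weighted solve via QR
    W = Diagonal(sqrt.(w))
    return qr(W * A) \ (W * y), w
end
```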
Regarding absolute vs relative errors: yes, of course you are right. But you would still need to give a single number for each observation / category, which you would then use to specify a weighted max-norm. That is again equivalent to choosing weights on each observation: reweight the LSQ system but then minimise the max-norm instead of the RMSE. So there will be two types of weights: one specifying the relative importance of each observation, and one that the algorithm produces so that the L2 minimisation effectively gives you an L∞ minimisation. If this is all very unclear then let's have a meeting next week and discuss it.
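The two kinds of weights compose multiplicatively in the reweighting step. A minimal sketch, with names of my own choosing (`w_imp` for the user-supplied importance weights, `w_alg` for the algorithmic ones):

```julia
# One IRLS weight update for a *weighted* max-norm target:
# w_imp expresses the relative importance of each observation,
# w_alg is the algorithmic weight vector, r = y - A*x is the
# current residual. The update grows w_alg on observations whose
# importance-scaled residual w_imp .* |r| is large.
function reweight(w_imp, w_alg, r; γ = 1.0)
    wnew = w_alg .* (w_imp .* abs.(r)).^γ .+ 1e-15
    wnew /= sum(wnew)
    return wnew
end
```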
Another idea:
Gabor mentioned looking at fitting with config-type-specific target errors in mind. How would we go about this? Performing a series of sequential (regularised) fits and taking "steps" in certain directions (some sort of optimisation problem)?
I guess we can only really do this with relative errors in mind? Such as:
liquid:diamond => 1:4
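One way such a ratio could be encoded as per-observation importance weights, assuming (one plausible reading) that 1:4 means diamond configs should be fit four times more tightly and so carry four times the weight; the dictionary layout and config-type labels here are made up for illustration:

```julia
# hypothetical mapping from config type to relative weight,
# encoding liquid:diamond => 1:4
target_ratio = Dict("liquid" => 1.0, "diamond" => 4.0)

# illustrative list of per-observation config types
configtypes = ["liquid", "liquid", "diamond", "liquid", "diamond"]

# importance weights, normalised to sum to one, ready to be
# combined with the algorithmic IRLS weights
w_imp = [target_ratio[ct] for ct in configtypes]
w_imp /= sum(w_imp)
```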
@cortner @gabor1