Implement bound constraints (e.g. with ADMM or Dog Leg) #3
An alternative could be to implement a Dog Leg type optimisation routine. Quick and dirty Python implementation:

```python
import numpy as np
import scipy.sparse.linalg


def dogbox_nnlsqr(A, b):
    # A Rectangular Trust Region Dogleg Approach for
    # Unconstrained and Bound Constrained Nonlinear Optimization
    Aop = scipy.sparse.linalg.aslinearoperator(A)

    def isfeasible(xx, xlb, xub):
        return np.all((xx >= xlb) & (xx <= xub))

    x = np.zeros(A.shape[1])
    delta = np.inf  # trust-region radius (inactive while infinite)
    tol = 1e-5
    max_iter = 50
    it = 0
    while it < max_iter:
        it += 1
        # Shifted bounds: the step s must keep x + s >= 0 and inside the radius.
        lb = np.fmax(-x, -delta)
        ub = delta * np.ones(A.shape[1])
        b_centered = b - Aop.matvec(x)
        mg = Aop.rmatvec(b_centered)  # negative gradient of 0.5*||A x - b||^2
        print('iter', it)
        print('x', x)
        # Variables pinned at a bound with the gradient pushing outward are
        # active; everything else forms the passive (free) set.
        passive_set = ~(((x <= lb) & (mg < 0)) | ((x >= ub) & (mg > 0)))
        print('passive_set', passive_set)

        def xreduce(xx):
            return xx[passive_set]

        def xexpand(xred):
            xe = np.zeros(A.shape[1])
            xe[passive_set] = xred
            return xe

        # A restricted to the passive variables, as a LinearOperator.
        Ared = scipy.sparse.linalg.LinearOperator(
            (A.shape[0], np.count_nonzero(passive_set)),
            matvec=lambda y: Aop.matvec(xexpand(y)),
            rmatvec=lambda y: xreduce(Aop.rmatvec(y)),
        )
        xrold = xreduce(x)
        br_centered = b - Ared.matvec(xrold)
        # Gauss-Newton step on the reduced problem.
        xr_newton = scipy.sparse.linalg.lsmr(Ared, br_centered)[0]
        lbr = xreduce(lb)
        ubr = xreduce(ub)
        if isfeasible(xr_newton, lbr, ubr):
            sr = xr_newton
        else:
            # Cauchy step: steepest descent with an exact line search,
            # using the reduced negative gradient at the current x.
            mgr = Ared.rmatvec(br_centered)
            gTgr = np.dot(mgr, mgr)
            mAgr = Ared.matvec(mgr)
            gTATAgr = np.dot(mAgr, mAgr)
            xr_cauchy = (gTgr / gTATAgr) * mgr
            if isfeasible(xr_cauchy, lbr, ubr):
                # Dogleg: walk from the Cauchy point towards the Newton
                # point until a bound is hit.
                NnC = xr_newton - xr_cauchy
                idx = NnC > 0
                alphau = np.min((ubr[idx] - xr_cauchy[idx]) / NnC[idx])
                alphal = np.min((lbr[~idx] - xr_cauchy[~idx]) / NnC[~idx])
                print('alpha', alphau, alphal)
                sr = xr_cauchy + np.fmin(alphau, alphal) * NnC
            else:
                # Scale the Cauchy point back into the box, then walk towards
                # the Newton point until a bound is hit.
                idx = xr_cauchy > 0
                betau = np.min(ubr[idx] / xr_cauchy[idx])
                betal = np.min(lbr[~idx] / xr_cauchy[~idx])
                print('beta', betau, betal)
                PC = np.fmin(betau, betal) * xr_cauchy
                NnPC = xr_newton - PC
                idx = NnPC > 0
                alphau = np.min((ubr[idx] - PC[idx]) / NnPC[idx])
                alphal = np.min((lbr[~idx] - PC[~idx]) / NnPC[~idx])
                print('alpha', alphau, alphal)
                sr = PC + np.fmin(alphau, alphal) * NnPC
        if np.dot(sr, sr) < tol:
            break
        x = x + xexpand(sr)
    return x
```

It would also be nice to compare it to the approach of BCLS.
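As a quick comparison in the meantime, here is a sketch that checks the routine against scipy.optimize.nnls; the random test problem below is a made-up choice purely for illustration:

```python
import numpy as np
import scipy.optimize

# Hypothetical test problem: a small random overdetermined system with a
# nonnegative ground truth, purely for illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.abs(rng.standard_normal(10))
b = A @ x_true

x_dog = dogbox_nnlsqr(A, b)
x_ref, _ = scipy.optimize.nnls(A, b)
print('max abs difference:', np.max(np.abs(x_dog - x_ref)))
```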
Implementing bound constraints with ADMM should be relatively easy:
http://stanford.edu/~eryu/nnlsqr.html
https://web.stanford.edu/~boyd/papers/pdf/admm_slides.pdf (Slide 27)
Quick and dirty Python implementation:
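A minimal sketch of the splitting on the linked slides, assuming the standard scaled form of ADMM for min 0.5*||Ax - b||^2 s.t. x >= 0; the function name admm_nnlsqr, the penalty parameter rho, and the stopping tolerances are illustrative choices:

```python
import numpy as np
import scipy.linalg


def admm_nnlsqr(A, b, rho=1.0, max_iter=500, tol=1e-8):
    # Split min 0.5*||Ax - b||^2 s.t. x >= 0 into
    #   min 0.5*||Ax - b||^2 + I_{z >= 0}(z)  s.t.  x = z,
    # and apply scaled-form ADMM (z is the consensus copy, u the scaled dual).
    n = A.shape[1]
    # The x-update solves (A^T A + rho*I) x = A^T b + rho*(z - u);
    # the matrix is fixed, so factor it once up front.
    chol = scipy.linalg.cho_factor(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(max_iter):
        x = scipy.linalg.cho_solve(chol, Atb + rho * (z - u))
        z_old = z
        z = np.maximum(x + u, 0.0)  # z-update: projection onto x >= 0
        u = u + x - z               # scaled dual update
        # Crude primal/dual residual check.
        if np.linalg.norm(x - z) < tol and np.linalg.norm(z - z_old) < tol:
            break
    return z  # the projected iterate is feasible by construction
```

Returning z rather than x guarantees a feasible (nonnegative) answer, and for general box constraints the z-update just becomes np.clip(x + u, lb, ub).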