Dealing with disjunction of linear constraints #67

Open
tomerarnon opened this issue Jun 14, 2019 · 2 comments
Labels: enhancement (New feature or request)

Comments

tomerarnon (Collaborator) commented Jun 14, 2019

Optimization-based methods (those that take complements) work well for "exclusion zone" type properties, but not as well when the property is f(X) ⊆ Y for convex Y, because the complement of Y leads to a disjunction of linear constraints.
To solve this type of problem, one needs to solve N optimization problems (one per constraint of Y) and show that all of them are infeasible. At the moment the only place this is described is the new CARS workshop ACAS notebook. We should also make solving all N subproblems the standard operating procedure for these types of problems. A small sketch of the decomposition follows.
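To make the decomposition concrete, here is a small illustration using LazySets (which is where constraints_list comes from); the box Y here is a made-up example, not from the notebook:

using LazySets

# Hypothetical example: Y = [0,1] × [0,1] as an intersection of 4 halfspaces a·x ≤ b.
Y = HPolytope([HalfSpace([ 1.0,  0.0], 1.0),   #  x₁ ≤ 1
               HalfSpace([-1.0,  0.0], 0.0),   # -x₁ ≤ 0
               HalfSpace([ 0.0,  1.0], 1.0),   #  x₂ ≤ 1
               HalfSpace([ 0.0, -1.0], 0.0)])  # -x₂ ≤ 0

# The complement of Y is the disjunction of the complements of its 4
# halfspaces, so certifying f(X) ⊆ Y amounts to showing each complement
# halfspace is unreachable -- one optimization problem per constraint.
for hs in constraints_list(Y)
    println(HalfSpace(-hs.a, -hs.b))   # complement (a·x ≥ b) of a·x ≤ b
end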

tomerarnon added the enhancement label on Jun 14, 2019
chelseas (Collaborator) commented Jan 31, 2020

You can represent this disjunction of N linear constraints using a ReLU or max piecewise-linear encoding. E.g., if you want x in [a, b], then you need to certify that x < a | x > b is unsatisfiable. Create new variables y = a - x and z = x - b; both y and z should be <= 0 if the problem is "in bounds", so the algorithm should search for solutions where either is > 0, which you can write as max(y, z) > 0. This can be nested to represent N disjuncts. BUT superlevel sets of a convex function are not necessarily convex (max(y, z) is convex, so max(y, z) > 0 describes a non-convex region).
SO, solving N problems with only halfplane constraints, which yields N convex problems that are far easier to solve, may well be faster. A minimal sketch of the max encoding follows.
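A minimal sketch of that encoding in plain Julia (the violation function and its names are illustrative, not from the package):

using LinearAlgebra: dot

# violation(x, a, b) > 0  ⟺  x < a | x > b, i.e. x is out of bounds.
violation(x::Real, a::Real, b::Real) = max(a - x, x - b)

# N halfspace complements aᵢ·x > bᵢ fold into one max the same way:
# the disjunction is satisfiable iff the largest slack can be made positive.
violation(x::AbstractVector, A::AbstractMatrix, b::AbstractVector) =
    maximum(dot(a, x) - bi for (a, bi) in zip(eachrow(A), b))

violation(0.5, 0.0, 1.0)   # -0.5 ≤ 0, so x = 0.5 is in bounds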

tomerarnon (Collaborator, Author) commented Aug 11, 2020

For the record, since this came up again: the initial proposal was essentially to implement

# For solvers which can only handle HalfSpace output constraints, like Reluplex.
# Rather than pass things in piecemeal as below, this operation can happen automatically
# any time problem.output is an AbstractPolytope that isn't a HalfSpace.
function convex_nv(solver, net, input, convex_output)
    res = nothing
    # Verify the property against each halfspace of the convex output set;
    # a counterexample for any single halfspace falsifies the whole property.
    for hs in constraints_list(convex_output)
        prob = Problem(net, input, hs)
        res = solve(solver, prob)
        res.status === :violated && return res
    end
    # All N subproblems hold (`res` is the last result, or `nothing` if there were none).
    return res
end
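Hypothetical usage (the solver choice, input set, and names below are illustrative assumptions, not fixed API):

# solver = Reluplex()   # any solver limited to HalfSpace output constraints
# X = Hyperrectangle(low = [-1.0, -1.0], high = [1.0, 1.0])
# result = convex_nv(solver, net, X, Y)   # `net`: a network, `Y`: a convex output set
# result.status is :violated as soon as any halfspace is violated, else the last status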

Today we noted that instead of the above, we can likely do

encode_entire_problem_except_output(model, problem)
for hs in constraints_list(convex_output)
    # Add only this halfspace's output constraint to the shared encoding.
    new_constraints = add_set_constraint!(model, hs, z[end])
    # rest of the algorithm (optimize, extract `result`)...
    result.status === :violated && return result
    # at the end, drop this halfspace's constraints so the model can be reused:
    delete.(model, new_constraints)
end

for most algorithms that would be affected by this. This way we don't have to re-encode the problem each time, and we may also be able to take advantage of a warm start in some cases. A toy sketch of the pattern is below.
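A toy JuMP sketch of that constraint-swapping pattern (GLPK and the box model are assumptions standing in for the real encoding):

using JuMP, GLPK

# A toy LP stands in for the fully encoded network: its "output" x lives in [0,1]².
model = Model(GLPK.Optimizer)
@variable(model, 0 <= x[1:2] <= 1)

# Complements of the halfspaces of Y = {x : x₁ ≤ 2, x₂ ≤ 2}. Each xᵢ ≥ 2
# is infeasible inside the box, so the property holds.
complements = [([1.0, 0.0], 2.0), ([0.0, 1.0], 2.0)]

for (a, b) in complements
    con = @constraint(model, a' * x >= b)   # add one complement halfspace
    optimize!(model)
    if termination_status(model) != MOI.INFEASIBLE
        println("violated at ", value.(x))   # a counterexample output exists
    end
    delete(model, con)   # remove it; the rest of the encoding is reused as-is
end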
