Debuggability of JuMP models #3034
Finding infeasible constraints from the solver output using …
It's not that special and the code is ugly, but I wrote this function to easily print out conflicting constraints:

```julia
function conflict_cons(model)
    compute_conflict!(model)
    error_found = false
    conflict_constraint_list = ConstraintRef[]
    for (F, S) in list_of_constraint_types(model)
        for con in all_constraints(model, F, S)
            try
                if MOI.get(model, MOI.ConstraintConflictStatus(), con) == MOI.IN_CONFLICT
                    push!(conflict_constraint_list, con)
                    println(con)
                end
            catch err
                # Some constraint types may not support ConstraintConflictStatus;
                # report the first failure and keep going.
                if !error_found
                    error_found = true
                    @info "Only one error will be printed"
                    @error "Error found with $F, $S, $con" err
                end
            end
        end
    end
    return conflict_constraint_list
end
```
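For context, a hypothetical call site might look like the following; Gurobi is just one solver that supports conflict computation, and the tiny model is made up for illustration:

```julia
using JuMP, Gurobi

# A deliberately infeasible toy model.
model = Model(Gurobi.Optimizer)
@variable(model, x >= 3)
@constraint(model, c1, x <= 1)
optimize!(model)        # termination_status(model) == MOI.INFEASIBLE

conflict_cons(model)    # prints the constraints that form the conflict
```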
I use this:

```julia
"""
    print_conflict!(model)

Compute and print a conflict for an infeasible `model`.
"""
function print_conflict!(model)
    JuMP.compute_conflict!(model)
    ctypes = list_of_constraint_types(model)
    for (F, S) in ctypes
        cons = all_constraints(model, F, S)
        for i in eachindex(cons)
            isassigned(cons, i) || continue
            con = cons[i]
            cst = MOI.get(model, MOI.ConstraintConflictStatus(), con)
            cst == MOI.IN_CONFLICT && @info name(con) con
        end
    end
    return nothing
end
```

This only prints the conflict (I usually have only a dozen constraints, tops), though it could easily be modified to return the list of constraints. Obviously, printing is not very useful if constraints / variables have no name...
It's interesting that we both wrote code to do approximately the same thing:
I often have thousands of constraints, and my function mostly works fine (except when computing the conflict takes excessive time, but that is normally a sign my model is a mess), although it could certainly be improved. I also tend to name all variables and constraints, as this helps a lot with debugging. In addition, a lot of the time I have a 'core' model that represents the core physical properties I care about; I then call this method for each experiment and add the things relevant to the current experiment. I find this helps because it gives me faith that I have a valid starting point. But this methodology is already described in the JuMP docs.

```julia
function create_model()
    model = Model()
    @variable(model, a)
    @constraint(model, capacity, a ≤ 10)
    return model
end

experiment1 = create_model()
@constraint(experiment1, ...)
@objective(experiment1, ...)
optimize!(experiment1)

experiment2 = create_model()
@constraint(experiment2, ...)
@objective(experiment2, ...)
optimize!(experiment2)
```
A relatively simple strategy to implement for debugging infeasibility is a sort of bisection search on a list of constraints: turn off, say, half at a time and test the remainder (a rough sketch is below).
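A minimal sketch of that bisection idea, assuming a hypothetical user-supplied `build_model(cons)` that constructs a JuMP model from a subset of constraint-building closures (none of these helper names come from JuMP):

```julia
using JuMP

# Returns true if the model is infeasible after solving.
function is_infeasible(model)
    optimize!(model)
    return termination_status(model) == MOI.INFEASIBLE
end

# Bisect the constraint list: if either half is infeasible on its own, recurse
# into it; otherwise the conflict needs constraints from both halves.
function bisect_infeasible(build_model, cons)
    length(cons) <= 1 && return cons
    half = length(cons) ÷ 2
    left, right = cons[1:half], cons[half+1:end]
    if is_infeasible(build_model(left))
        return bisect_infeasible(build_model, left)
    elseif is_infeasible(build_model(right))
        return bisect_infeasible(build_model, right)
    else
        return cons  # infeasibility spans both halves; stop narrowing
    end
end
```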
This is what I have done for many years, especially for classes of models where no solver supports conflict computation.
There's the `copy_conflict` function:

```julia
julia> using JuMP, Gurobi

julia> model = Model(Gurobi.Optimizer);

julia> set_silent(model)

julia> @variable(model, 0.1 <= x <= 0.9, Int)
x

julia> @variable(model, y <= 0)
y

julia> @constraint(model, x <= y)
x - y ≤ 0.0

julia> optimize!(model)

julia> compute_conflict!(model)

julia> conflict, index_map = copy_conflict(model);

julia> print(conflict)
Feasibility
Subject to
 x ≥ 0.1
 x ≤ 0.9
 x integer
```

PR to improve the docs here: #3039
Another thing we've been using in a project to verify solutions at small scales is to have an enumeration function for the integer variables (they have to be bounded):

```julia
function min_via_enum(f, n, values = fill(0:1, n))
    solutions = Iterators.product(values...)
    best_val = Inf
    best_sol = nothing
    for sol in solutions
        sol_vec = collect(sol)
        val = f(sol_vec)
        if best_val > val
            best_val = val
            best_sol = sol_vec
        end
    end
    return best_val, best_sol
end
```

In the mixed-integer case, …
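A hypothetical usage, cross-checking a small made-up objective over three binary variables (the objective and the penalty term are invented for illustration):

```julia
# Brute-force check of a penalized objective over x ∈ {0,1}^3.
f(x) = -(3x[1] + 2x[2] + 4x[3]) + 100 * max(0, 2x[1] + x[2] + 3x[3] - 4)
best_val, best_sol = min_via_enum(f, 3)
# best_sol can then be compared against the solution a MIP solver returns.
```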
I have a start here: https://jump.dev/JuMP.jl/previews/PR3043/tutorials/getting_started/debugging/

Comments appreciated: #3043
Here is another reference for prior art: Math Program Inspector.
Thanks @ccoffrin: the bibliography in your link gives this nice reference to Bruce McCarl's paper: https://ageconsearch.umn.edu/record/15553/files/30020403.pdf
I've merged a tutorial: https://jump.dev/JuMP.jl/dev/tutorials/getting_started/debugging/

From discussion with @ccoffrin, one thing we could add is a modeling suggestion to structure models so that they have complete recourse (that is, they never have an infeasible solution).
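A minimal sketch of that idea, assuming "complete recourse" here means adding a heavily penalized slack so a constraint can never make the model infeasible (the numbers and the HiGHS solver are arbitrary choices):

```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, x >= 0)
@variable(model, slack >= 0)               # recourse variable
@constraint(model, demand, x + slack >= 10)
@constraint(model, capacity, x <= 4)       # without slack, demand is unmeetable
@objective(model, Min, x + 1_000 * slack)  # heavy penalty on using the slack
optimize!(model)
value(slack)  # 6.0: a nonzero slack flags where the original model was infeasible
```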
Proposed feasibility relaxation tool is here: jump-dev/MathOptInterface.jl#1995. We'd just need some plumbing on the JuMP side to tidy it up.
One area where I think we can do much more to help users is to provide tooling and documentation to help them debug and test the optimization models that they build.
I had this as one of the items in #2348, but I'm lifting it to a stand-alone issue.
I think what this needs is a new tutorial that goes over some common issues, and provides links to other resources.
If anyone has ideas/links, please post them below.
Motivation
@mtanneau's JuMP-dev talk discussed a lot of this, but it keeps coming up. I see a lot of common issues on Discourse where people are missing some strategies to debug things.
Prior art
It's also a topic that is surprisingly lacking in other modeling libraries.
One of them suggests adding `check`s to your code to validate data, and that "you may have to spend some time reviewing your formulation before you discover what is wrong." The tooling suggestion is to print out a constraint or objective to look at the data, but that doesn't scale very well to large problems.

Gurobi has `feasrelax` and `IIS` support: https://www.gurobi.com/documentation/9.5/examples/diagnose_and_cope_with_inf.html

I found this excellent paper that should be required reading for anyone who encounters a bug while modelling:
It does have one great paragraph: "Just like science, debugging cannot be a cold, calculating and linear process. Both involve essential elements of inspired guesswork, hunches and sudden flashes of insight which cut through the mist."
They lay out a good framework for debugging as a table/flow-chart. We could aim to replicate that.
Other links
Documentation: debugging Julia code in general
Give examples of how different common Julia errors can arise (`BoundsError`, `MethodError`, ...); see the sketch after this list.
Other resources
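As a starting point, a couple of hypothetical snippets showing how those errors typically arise (not taken from any existing tutorial):

```julia
# BoundsError: indexing past the end of an array.
x = [1, 2, 3]
x[4]              # ERROR: BoundsError: attempt to access 3-element Vector{Int64} at index [4]

# MethodError: calling a function with argument types it has no method for.
f(a::Int) = a + 1
f("hello")        # ERROR: MethodError: no method matching f(::String)
```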
Documentation: debugging solver issues
If the solver returns something unexpected, 99% of the time there is a bug in your code
Solver returns something unexpected.
Other resources
Erwin has a good tactic for debugging unbounded problems: http://yetanothermathprogrammingconsultant.blogspot.com/2018/08/the-best-way-to-debug-infeasible-models.html
So, roughly, a free variable declaration becomes one with large artificial bounds.
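A hedged sketch of what that transformation might look like in JuMP (the bound value of `1e8` and the variable name are arbitrary choices):

```julia
using JuMP

model = Model()

# Original, free variable: if the model is unbounded, the solver gives
# little information about which variable runs away.
# @variable(model, x)

# Debugging version: add large artificial bounds so the model solves and
# the offending variable shows up at (or near) its bound in the solution.
@variable(model, -1e8 <= x <= 1e8)
```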
Documentation: unit testing
@test
to ensureprimal_feasibility_report
to validate (in)feasible solutionsTooling: feasibility relaxation
Some solvers have native support, but we should be able to write this as a function in JuMP.
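For illustration, a rough sketch (not an existing JuMP API) of such a function for models whose constraints are scalar affine ≤ / ≥ constraints: add a penalized slack to each one and minimize the total violation.

```julia
using JuMP

# Sketch: relax every scalar affine <= / >= constraint with a nonnegative
# slack variable and minimize the total violation. Equality constraints and
# variable bounds are ignored to keep the example short.
function relax_feasibility_sketch!(model)
    slacks = VariableRef[]
    for (F, S) in list_of_constraint_types(model)
        F == AffExpr || continue  # only handle scalar affine constraints here
        for con in all_constraints(model, F, S)
            if S <: MOI.LessThan
                s = @variable(model, lower_bound = 0)
                set_normalized_coefficient(con, s, -1.0)  # f(x) - s <= b
                push!(slacks, s)
            elseif S <: MOI.GreaterThan
                s = @variable(model, lower_bound = 0)
                set_normalized_coefficient(con, s, 1.0)   # f(x) + s >= b
                push!(slacks, s)
            end
        end
    end
    @objective(model, Min, sum(slacks))
    return slacks
end
```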
Tooling: how to prove your formulation is correct
I've never really seen a good answer for this, and this just stung me. My model was: …

So we need some way of easily validating inputs and outputs. We have `primal_feasibility_report`, so perhaps we could have some unit-testing framework that lets you provide sets of variable values to assert that they are (in)feasible.

@mtanneau pointed out that small test cases in unit tests can be brittle if there are multiple optimal solutions, you're using a local solver, or you're timing out early.
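A minimal sketch of what such an assertion could look like today with `primal_feasibility_report` and the `Test` stdlib (the model and the test points are made up):

```julia
using JuMP, Test

model = Model()  # no solver is needed just to check feasibility of a point
@variable(model, 0 <= x <= 10)
@constraint(model, c, 2x <= 8)

# A point we expect to be feasible: no violated constraints reported.
@test isempty(primal_feasibility_report(model, Dict(x => 3.0)))

# A point we expect to violate the constraint 2x <= 8.
@test !isempty(primal_feasibility_report(model, Dict(x => 5.0)))
```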