# Tutorial

```julia
using CoolPDLP
using HiGHS: HiGHS
using JLArrays
using JuMP: JuMP, MOI
using MathOptBenchmarkInstances: Netlib, list_instances, read_instance
```

## Creating a MILP
You can use QPSReader.jl to read a MILP from a local MPS file, or MathOptBenchmarkInstances.jl to automatically download standard benchmark sets (which we do here).
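For the local-file route, a minimal sketch with QPSReader.jl could look like this (the file name is a placeholder for your own MPS file):

```julia
using QPSReader: readqps

# Hypothetical local file; replace "model.mps" with your own path.
qps_local = readqps("model.mps")
```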
```julia
dataset = Netlib
list = list_instances(dataset)
name = list[4]
qps, path = read_instance(dataset, name);
```

A MILP object can be constructed from there:

```julia
milp = MILP(qps; dataset, name, path)
```

```
MILP instance afiro from dataset Netlib:
- types:
  - values Float64
  - vectors Vector{Float64}
  - matrices SparseArrays.SparseMatrixCSC{Float64, Int64}
- variables: 32
  - 32 continuous
  - 0 integer
- constraints: 27
  - 19 inequalities
  - 8 equalities
- nonzeros: 83
```

Its attributes can be queried:
```julia
nbvar(milp)
```

```
32
```

```julia
nbcons(milp)
```

```
27
```

Note that manual construction is also an option if you provide the constraints, variable bounds and objectives as arrays.
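The arrays involved in such a manual construction would look something like the sketch below. This is a toy illustration only: the exact `MILP` constructor signature is not reproduced here, so consult the `MILP` docstring for the real fields.

```julia
using SparseArrays

# Data for  min cᵀx  s.t.  lc ≤ A x ≤ uc,  lv ≤ x ≤ uv  (illustrative toy problem).
A = sparse([1.0 1.0; 1.0 -1.0])    # constraint matrix
lc, uc = [-Inf, 0.0], [10.0, 0.0]  # constraint bounds (second row is an equality)
lv, uv = [0.0, 0.0], [Inf, Inf]    # variable bounds
c = [1.0, 2.0]                     # objective coefficients
# milp_manual = MILP(...)  # pass these arrays following the MILP docstring
```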
## Solving a MILP

You can use the PDLP algorithm to solve the continuous relaxation of a MILP. The first step is to define the parameters inside a PDLP struct.
```julia
algo = PDLP(;
    termination_reltol = 1.0e-6,
    time_limit = 10.0,
)
```

```
PDLP algorithm:
- ConversionParameters: types=(Float64, Int64, SparseArrays.SparseMatrixCSC), backend=KernelAbstractions.CPU(false)
- PreconditioningParameters: chambolle_pock_alpha=1.0, ruiz_iter=10
- StepSizeParameters: invnorm_scaling=0.9, primal_weight_damping=0.5, zero_tol=1.0e-8
- RestartParameters: sufficient_decay=0.2, necessary_decay=0.8, artificial_decay=0.36
- GenericParameters: show_progress=false, check_every=100, record_error_history=true
- TerminationParameters: termination_reltol=1.0e-6, max_kkt_passes=100000, time_limit=10.0
```

Then all it takes is to call solve.
```julia
sol, stats = solve(milp, algo);
```

The solution is available as a PrimalDualSolution:

```julia
sol.x
```

```
32-element Vector{Float64}:
  79.99998483892944
  25.499948968432776
  54.50001175963277
  84.79998144856992
  59.021542483628394
   0.0
   0.0
   0.0
   0.0
   0.0
   ⋮
   0.0
   0.0
   0.0
   0.0
   0.0
 339.94294699076795
 236.51443477946376
  63.394278500975204
   0.0
```

The stats contain information about the convergence of the algorithm:
```julia
stats
```

```
Convergence stats with termination status OPTIMAL:
- KKT relative errors: primal 9.909e-08, dual 3.936e-08, gap 3.756e-07
- time elapsed: 1.041 seconds
- KKT passes: 900
```

You can check the feasibility and objective value:
```julia
is_feasible(sol.x, milp; cons_tol = 1.0e-4)
```

```
true
```

```julia
objective_value(sol.x, milp)
```

```
-464.75321283671656
```

## Running on the GPU
To run the same algorithm on the GPU, all it takes is a different set of parameters, which forces conversion of the instance:
```julia
algo_gpu = PDLP(
    Float32,             # desired float type
    Int32,               # desired int type
    GPUSparseMatrixCSR;  # GPU sparse matrix type, replace with e.g. CuSparseMatrixCSR
    backend = JLBackend(),  # replace with e.g. CUDABackend()
    termination_reltol = 1.0f-6,
    time_limit = 10.0,
)
```

```
PDLP algorithm:
- ConversionParameters: types=(Float32, Int32, GPUSparseMatrixCSR), backend=JLArrays.JLBackend(false)
- PreconditioningParameters: chambolle_pock_alpha=1.0, ruiz_iter=10
- StepSizeParameters: invnorm_scaling=0.9, primal_weight_damping=0.5, zero_tol=1.0e-8
- RestartParameters: sufficient_decay=0.2, necessary_decay=0.8, artificial_decay=0.36
- GenericParameters: show_progress=false, check_every=100, record_error_history=true
- TerminationParameters: termination_reltol=1.0e-6, max_kkt_passes=100000, time_limit=10.0
```

The result of the algorithm will live on the GPU:
```julia
sol_gpu, stats_gpu = solve(milp, algo_gpu)
sol_gpu.x
```

```
32-element JLArrays.JLArray{Float32, 1}:
  80.00002
  25.500057
  54.499992
  84.800026
  59.021748
   0.0
   0.0
   0.0
   0.0
   0.0
   ⋮
   0.0
   0.0
   0.0
   0.0
   0.0
 339.94275
 236.51427
  63.394226
   0.0
```

To bring it back to the CPU, just call the Array converter.
```julia
objective_value(Array(sol_gpu.x), milp)
```

```
-464.7530648803711
```

## Using the JuMP interface
If you have a model available in JuMP.jl, you can simply choose CoolPDLP.Optimizer as your solver, and pass it the same options as before.
```julia
model = JuMP.read_from_file(path; format = MOI.FileFormats.FORMAT_MPS)
JuMP.set_optimizer(model, CoolPDLP.Optimizer)
JuMP.set_silent(model)
JuMP.set_attribute(model, "termination_reltol", 1.0e-6)
JuMP.set_attribute(model, "matrix_type", GPUSparseMatrixCSR)
JuMP.set_attribute(model, "backend", JLBackend())
JuMP.optimize!(model)
x_jump = JuMP.value.(JuMP.all_variables(model))
```

```
32-element Vector{Float64}:
  79.99998483892942
  25.49994896843277
  54.50001175963277
  84.79998144856994
  59.02154248362832
   0.0
   0.0
   0.0
   0.0
   0.0
   ⋮
   0.0
   0.0
   0.0
   0.0
   0.0
 339.94294699076806
 236.51443477946384
  63.39427850097521
   0.0
```

```julia
objective_value(x_jump, milp)
```

```
-464.75321283671667
```

## Comparing with HiGHS
To check our solution, let's compare it with the output from the HiGHS solver:
```julia
model_highs = JuMP.read_from_file(path; format = MOI.FileFormats.FORMAT_MPS)
JuMP.set_optimizer(model_highs, HiGHS.Optimizer)
JuMP.set_silent(model_highs)
JuMP.optimize!(model_highs)
x_ref = JuMP.value.(JuMP.all_variables(model_highs))
```

```
32-element Vector{Float64}:
  80.0
  25.5
  54.5
  84.80000000000001
  18.214285714285715
   0.0
   0.0
   0.0
   0.0
  -0.0
   ⋮
   0.0
   0.0
   0.0
   0.0
   0.0
 339.9428571428572
 383.9428571428572
  -0.0
   0.0
```

Of course, the solution given by HiGHS is feasible too, and we can compare objective values:

```julia
is_feasible(x_ref, milp; cons_tol = 1.0e-4)
```

```
true
```

```julia
objective_value(x_ref, milp)
```

```
-464.7531428571429
```

This page was generated using Literate.jl.
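As a closing sanity check, one can quantify the agreement between the two solvers. The sketch below reuses `sol.x`, `x_ref` and `milp` from above; note that the primal iterates themselves need not coincide, since an LP can admit several optimal solutions, so we compare objective values rather than solution vectors:

```julia
# Both solutions are optimal, so their objective values should agree
# even if the iterates differ (as they visibly do here).
obj_pdlp = objective_value(sol.x, milp)
obj_ref = objective_value(x_ref, milp)
rel_gap = abs(obj_pdlp - obj_ref) / abs(obj_ref)
@assert rel_gap <= 1.0e-5  # agreement to high relative accuracy
```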