Dear Leo,
Although the CP version of your program doesn't return from solve(), our tests show that intermediate solutions are displayed and the solver continues working as it seeks to prove optimality. So to compare CP and MP performance, you could set a time limit for CP.
The current CP model's objective gets stuck at around 0.04 after about 20s, but you can improve its performance by specifying a search phase that reduces the number of decision variables for the problem.
Simply adding
m.add(m.search_phase(sets_present))
will improve the CP Optimizer results (the objective rises to around 0.097).
Also, I note that the seed of the random number generator isn't initialized, which means you solve a different problem each time. I ran my experiments with the random seed set to 0.
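Concretely, seeding NumPy's generator before building the instance makes the MILP and CP runs solve the same problem (a minimal sketch; seed value 0 is what I used):

```python
import numpy as np

# Fix the seed so every run draws the same random instance
np.random.seed(0)
a = np.random.uniform(size=5)

# Re-seeding reproduces the exact same draws
np.random.seed(0)
b = np.random.uniform(size=5)

assert np.array_equal(a, b)
```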
It would be interesting for us to know what version of COS you are using.
I hope this helps.
Edit: In addition, it would be interesting to test the scheduling model, since it would rely on a temporal relaxation similar to what the CPLEX model uses.
------------------------------
Renaud Dumeur
------------------------------
Original Message:
Sent: Sun May 02, 2021 11:36 AM
From: Leo Singer
Subject: Max weighted coverage problem: CP versus MILP formulation, CP Optimizer much slower?
How do you efficiently express a max weighted set coverage problem in CP Optimizer? The conventional MILP formulation is solved very rapidly by CPLEX (via docplex.mp), but the same formulation in CP Optimizer (via docplex.cp) takes seemingly forever to solve.
The reason that I am trying to do a max weighted set coverage problem in CP Optimizer is that this is a toy version of my real problem, which involves scheduling with optional intervals, with the presence of each interval identified with the presence of a corresponding set.
Imports, problem setup
from docplex.cp import model as cp
from docplex.mp import model as mp
import numpy as np

n_elements = 10000
n_sets = 1000
n_elements_per_set = 100
max_sets_present = 10

# Random weights for each element
weights = np.random.uniform(size=n_elements)
weights /= weights.sum()

# Random assignments of elements to sets
assigns = np.asarray([
    np.random.choice(n_elements, n_elements_per_set, replace=False)
    for _ in range(n_sets)
])
assigns_inverse = [np.nonzero(assigns == i)[0] for i in range(n_elements)]
MILP version
with mp.Model() as m:
    elems_present = m.binary_var_list(n_elements)
    sets_present = m.binary_var_list(n_sets)
    m.add_(
        m.sum(sets_present[j] for j in assign_inverse) >= elem_present
        for assign_inverse, elem_present in zip(assigns_inverse, elems_present)
    )
    m.add_(m.sum(sets_present) <= max_sets_present)
    m.maximize(m.scal_prod(elems_present, weights))
    m.print_information()
    %time solution = m.solve()

objective_value = solution.get_objective_value()
print('objective_value =', objective_value)
Output:
Model: docplex_model1
- number of variables: 11000
- binary=11000, integer=0, continuous=0
- number of constraints: 10001
- linear=10001
- parameters: defaults
- objective: maximize
- problem type is: MILP
CPU times: user 1min 23s, sys: 1.69 s, total: 1min 25s
Wall time: 25.6 s
objective_value = 0.11137026929164126
CP version
with cp.CpoModel() as m:
    elems_present = m.binary_var_list(n_elements)
    sets_present = m.binary_var_list(n_sets)
    m.add(
        m.sum(sets_present[j] for j in assign_inverse) >= elem_present
        for assign_inverse, elem_present in zip(assigns_inverse, elems_present)
    )
    m.add(m.sum(sets_present) <= max_sets_present)
    m.maximize(m.scal_prod(elems_present, weights))
    m.write_information()
    %time solution = m.solve(LogVerbosity='Verbose', Workers='Auto')

objective_value, = solution.get_objective_values()
print('objective_value =', objective_value)
Output:
Model: <ipython-input-4-9b2e732a693a>
- source file: <ipython-input-4-9b2e732a693a>
- modeling time: 0.41 sec
- number of integer variables: 11000
- number of interval variables: 0
- number of expressions: 10003
- number of expression nodes: 41008
- operations: greaterOrEqual: 10000, lessOrEqual: 1, maximize: 1, scalProd: 1, sum: 10001
(...never terminates)
------------------------------
Leo Singer
------------------------------
#DecisionOptimization