Dear Shanaka,
You can get better performance by upgrading to 20.1:
New value for parameter Workers: 1
New value for parameter SearchType: Restart
! --------------------------------------------------- CP Optimizer 20.1.0.0 --
! Maximization problem - 10 variables, 1 constraint
! Presolve : 10001 extractables eliminated
! Workers = 1
! SearchType = Restart
! Initial process time : 0.49s (0.47s extraction + 0.02s propagation)
! . Log search space : 10.0 (before), 10.0 (after)
! . Memory usage : 18.8 MB (before), 18.8 MB (after)
! Using sequential search.
! ----------------------------------------------------------------------------
! Best Branches Non-fixed Branch decision
0 10 -
+ New bound is 2279.829
* 3.511515e-09 16 0.55s (gap > 10000%)
* 1674.194 18 0.58s (gap is 36.17%)
* 1679.711 29 0.63s (gap is 35.73%)
* 1680.039 38 0.67s (gap is 35.70%)
* 1681.108 50 0.71s (gap is 35.61%)
! ----------------------------------------------------------------------------
! Search completed, 5 solutions found.
! Best objective : 1681.108 (optimal - effective tol. is 0.168111)
! Best bound : 2279.829
! ----------------------------------------------------------------------------
! Number of branches : 138
! Number of fails : 62
! Total memory usage : 101.9 MB (101.8 MB CP Optimizer + 0.0 MB Concert)
! Time spent in solve : 1.13s (0.66s engine + 0.47s extraction)
! Search speed (br. / s) : 212.3
! ----------------------------------------------------------------------------
Optimal solution found with objective 1681.11.
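A note on the "effective tol." line in the log above: assuming CP Optimizer's default RelativeOptimalityTolerance of 1e-4 (the absolute OptimalityTolerance default is far smaller and does not bind here), the effective tolerance is just that fraction of the best objective, which a quick check confirms:

```python
# Values reported in the log above
best_obj = 1681.108
effective_tol = 0.168111

# Assumed default relative optimality tolerance (1e-4)
rel_tol = 1e-4
computed = round(best_obj * rel_tol, 6)
print(computed)  # 0.168111, matching the log
```

So "optimal" here means optimal within 0.01% of the best objective, not that the bound of 2279.829 was closed.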
I hope this helps.
------------------------------
Renaud Dumeur
------------------------------
Original Message:
Sent: Mon September 27, 2021 03:49 AM
From: Shanaka Perera
Subject: docplex.cp.model is slower than the exhaustive search
Dear Renaud,
Yes, you are right: when I used SearchType='DepthFirst' and Workers=1, the run time was reduced to about 3 seconds, but it is still slower than the exhaustive search algorithm I have written.
Can you please tell me, using my example or some other example, at what problem dimensions CP Optimizer would show an improvement in run time?
Thanks,
Shanaka
------------------------------
Shanaka Perera
Original Message:
Sent: Mon September 27, 2021 03:36 AM
From: Renaud Dumeur
Subject: docplex.cp.model is slower than the exhaustive search
Dear Shanaka,
On such a small problem, you can try the SearchType='DepthFirst' parameter setting, which forces CP Optimizer to perform only exhaustive search, together with Workers=1, which limits the number of cores used during search. It should reduce the solve time!
I hope this helps,
Cheers,
------------------------------
Renaud Dumeur
Original Message:
Sent: Sun September 26, 2021 03:44 PM
From: Shanaka Perera
Subject: docplex.cp.model is slower than the exhaustive search
I am working on a combinatorial optimisation problem and realised that CP Optimizer is taking a significant time to run. Here is a toy example, using the Python API for `docplex`:
import numpy as np
from docplex.cp.model import CpoModel
N = 5000
S = 10
k = 2
u_i = np.random.rand(N)[:,np.newaxis]
u_ij = np.random.rand(N*S).reshape(N, S)
beta = np.random.rand(N)[:,np.newaxis]
m = CpoModel(name = 'model')
R = range(0, S)
idx = [(j) for j in R]
I = m.binary_var_dict(idx)
m.add_constraint(m.sum(I[j] for j in R)<= k)
total_rev = m.sum(beta[i,0] / ( 1 + u_i[i,0]/sum(I[j] * u_ij[i,j] for j in R) ) for i in range(N) )
m.maximize(total_rev)
sol=m.solve(agent='local')
sol.print_solution()
for i in R:
    if sol[I[i]] == 1:
        print('i : ' + str(i))
Part of the output is as follows:
Model constraints: 1, variables: integer: 10, interval: 0, sequence: 0
Solve status: Optimal
Search status: SearchCompleted, stop cause: SearchHasNotBeenStopped
Solve time: 76.14 sec
-------------------------------------------------------------------------------
Objective values: (1665.58,), bounds: (1665.74,), gaps: (9.27007e-05,)
Variables:
+ 10 anonymous variables
I tried the same with an exhaustive search:
import numpy as np
import pandas as pd
from itertools import combinations,permutations,product
import time
start = time.time()
results = []
for K_i in range(1, k+1):  # subset sizes 1..k
    comb = list(combinations(range(S), K_i))
    A = len(comb)
    for a in range(A):  # each candidate subset
        comb_i = comb[a]
        I = np.repeat(0, S).reshape(-1, 1)
        I[comb_i, 0] = 1
        u_j = np.matmul(u_ij, I)
        total_rev = np.sum(beta / (1 + u_i/u_j))
        results.append({'comb_i': comb_i, 'total_rev': total_rev})
end = time.time()
time_elapsed = end - start
print('time_elapsed : ', str(time_elapsed))
results = pd.DataFrame(results)
opt_results = results[results['total_rev'] == max(results['total_rev'].values)]
print(opt_results)
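As a side note on why enumeration wins here: with S = 10 and k = 2 there are only C(10,1) + C(10,2) = 55 candidate subsets, and all of them can be scored in a single NumPy call. A self-contained vectorized sketch of the same idea, on a smaller random instance (the sizes and seed are my own, not from the post):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
N, S, k = 100, 10, 2          # smaller N than the post, same structure
u_i = rng.random((N, 1))
u_ij = rng.random((N, S))
beta = rng.random((N, 1))

# Encode every subset of size 1..k as a 0/1 column of one matrix
subsets = [c for K in range(1, k + 1) for c in combinations(range(S), K)]
M = np.zeros((S, len(subsets)))
for col, c in enumerate(subsets):
    M[list(c), col] = 1

# u_j for every subset at once: (N, S) @ (S, n_subsets) -> (N, n_subsets)
u_j = u_ij @ M
revenues = (beta / (1 + u_i / u_j)).sum(axis=0)   # one score per subset
best = subsets[int(np.argmax(revenues))]
print(len(subsets), best)
```

This replaces the Python-level loop over subsets with one matrix product, which is where the 0.013-second timing above comes from: the search space is tiny and the per-candidate evaluation vectorizes perfectly.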
Output:
time_elapsed : 0.012971639633178711
comb_i total_rev
23 (1, 6) 1665.581329
As you can see, CP Optimizer is more than 1000 times slower than the exhaustive search. Is there a way to improve the CP Optimizer algorithm?
------------------------------
Shanaka Perera
------------------------------
#DecisionOptimization