Originally posted by: M6BN_Stefano_Gliozzi
Laszlo,
Many thanks. It is very important for us to know that CPLEX (I assume Concert is considered part of it) does not benefit much from the POWER8-specific compiler optimizations.
It is also of paramount importance for me and my fellow architect to look at the system parameters and study the paper you point to. We will study it.
May I pick your brain a bit more? Let me explain what happens in the application (you are right, we will have to manage coexistence with other pieces).
Every night we need to solve several mutually independent stochastic MILPs which, in our formulation, have roughly 2 to 10 million variables and 1 to 5 million constraints (the number of general integer variables ranges from about 300 to about 50'000).
We do this in 12 fully parallel processes, each of which receives in turn a payload consisting of a set of models that share part of the input (this is to take advantage of some economies on the DB server queries, which alone account for roughly 40% of the overall elapsed time).
I know that these models are, generally speaking, really easy to solve. They usually reach the integer optimum in 0 or fewer than 10 nodes, taking a few seconds for the presolve and LP phases.
I also know that they usually reduce to fewer than 100'000 columns and rows after presolve. This is due to our (lazy? maintainable?) strategy for building the model: we generate many variables and constraints that, certainly or after a first analysis, will be 0 (the variables) or redundant (the constraints).
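To make that concrete, the pattern is roughly like this toy sketch (knownToBeZero() is an invented stand-in for our first analysis, and the sizes are of course fake):

```cpp
#include <ilcplex/ilocplex.h>
ILOSTLBEGIN

// Invented stand-in for the "first analysis": in the real application this
// decision comes from the data read from the DB server.
static bool knownToBeZero(int j) { return j % 3 == 0; }

int main() {
   IloEnv env;
   try {
      IloModel model(env);
      const int n = 9;                                   // toy size only
      IloNumVarArray x(env, n, 0.0, IloInfinity, ILOFLOAT);

      // "Lazy" generation: every variable is created, and the ones the first
      // analysis says must be 0 are simply fixed; CPLEX presolve then removes
      // them, at the price of the build + presolve overhead discussed here.
      for (int j = 0; j < n; ++j)
         if (knownToBeZero(j))
            x[j].setBounds(0.0, 0.0);

      // Dummy objective so the toy model is complete.
      IloExpr obj(env);
      for (int j = 0; j < n; ++j)
         obj += x[j];
      model.add(IloMinimize(env, obj));
      obj.end();

      IloCplex cplex(model);
      cplex.solve();
      env.out() << "status = " << cplex.getStatus() << endl;
   }
   catch (IloException& e) {
      env.error() << "Concert exception: " << e << endl;
   }
   env.end();
   return 0;
}
```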
We adopted this development scheme since it was much easier to build (and even more so to test) this kind of model in Concert than to generate it in C++ with direct CPLEX calls while doing a first level of presolving ourselves.
I suspect that now, on top of the presolve overhead that we can measure, we probably have too much overhead in building the Concert model. Do you have any tips specific to the Concert part? Should we plan to redo it in plain C++ / CPLEX?
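For the Concert part, the only idea I have so far is to apply the idiom I see in the shipped C++ examples: build each row with a single IloExpr and +=, collect the rows in an IloRangeArray, and hand everything to the model in one add() call. A sketch with invented names for our data:

```cpp
#include <ilcplex/ilocplex.h>
#include <vector>
ILOSTLBEGIN

// Invented data layout: for row i, idx[i]/coef[i] hold the sparse pattern and
// rhs[i] the right-hand side. Only the Concert calls matter here.
static void buildRows(IloEnv env, IloModel model, IloNumVarArray x,
                      const std::vector< std::vector<int> >&    idx,
                      const std::vector< std::vector<double> >& coef,
                      const std::vector<double>&                rhs)
{
   IloRangeArray rows(env);
   for (int i = 0; i < (int)rhs.size(); ++i) {
      IloExpr lhs(env);                       // one IloExpr per row
      for (int k = 0; k < (int)idx[i].size(); ++k)
         lhs += coef[i][k] * x[idx[i][k]];    // += avoids chains of temporaries
      rows.add(lhs <= rhs[i]);
      lhs.end();                              // release the expression right away
   }
   model.add(rows);                           // one bulk add instead of row-by-row
}

int main() {
   IloEnv env;
   try {
      IloModel model(env);
      IloNumVarArray x(env, 2, 0.0, 10.0, ILOFLOAT);

      // Toy instance: x0 + 2*x1 <= 8, 3*x0 + x1 <= 9, maximize x0 + x1.
      std::vector< std::vector<int> >    idx  = { {0, 1}, {0, 1} };
      std::vector< std::vector<double> > coef = { {1.0, 2.0}, {3.0, 1.0} };
      std::vector<double>                rhs  = { 8.0, 9.0 };

      buildRows(env, model, x, idx, coef, rhs);
      model.add(IloMaximize(env, x[0] + x[1]));

      IloCplex cplex(model);
      cplex.solve();
      env.out() << "obj = " << cplex.getObjValue() << endl;
   }
   catch (IloException& e) {
      env.error() << "Concert exception: " << e << endl;
   }
   env.end();
   return 0;
}
```

But maybe the bigger win is elsewhere, which is why I am asking.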
#CPLEXOptimizers#DecisionOptimization