Originally posted by: babrooks
I have a question about CPLEX memory usage. I read here (https://www-304.ibm.com/support/docview.wss?uid=swg21399933) that CPLEX memory usage in MB on LPs should be approximately proportional to the number of constraints divided by 1,000. However, I'm finding that the memory usage grows much faster than the number of constraints.
In my OPL code, I define the model for integer values of the parameters nbids and nvals. Counting the constraints, there are nbids^2*(nvals+1)*(2*nvals+1)/nvals + nbids*(nvals+1) + nvals^2 of them. When I run the code with nbids=300 and nvals=2, it uses about 2.8 GB, but when I increase nbids to 500, it uses more than 50 GB (the process gets killed by the server). The number of constraints only increases by about 2.8 times, and the number of variables grows at about the same rate.
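As a sanity check on that growth rate, here is a quick sketch that evaluates the constraint-count formula above at the two problem sizes (the function name is just for illustration):

```python
# Constraint count from the formula above:
# nbids^2*(nvals+1)*(2*nvals+1)/nvals + nbids*(nvals+1) + nvals^2
def n_constraints(nbids, nvals):
    return (nbids**2 * (nvals + 1) * (2 * nvals + 1)) // nvals \
        + nbids * (nvals + 1) + nvals**2

c300 = n_constraints(300, 2)   # 675,904 constraints
c500 = n_constraints(500, 2)   # 1,876,504 constraints
print(c300, c500, c500 / c300) # the ratio is about 2.78
```

So the model itself really does grow by only a factor of about 2.8 between the two runs, while memory use grows by well over an order of magnitude.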
I'm using the primal-dual barrier algorithm, and OPL-CPLEX is run using a C++ driver. The driver just loads the OPL model, solves using barrier, and writes the output to an ASCII file. The driver is run on a single node of a Beowulf cluster with 12 cores and 4 GB per core. I have tried setting the
Any ideas for why the memory usage is growing so fast? Is there a better approximation for the amount of memory used by CPLEX?
Thanks.
#CPLEXOptimizers#DecisionOptimization