Originally posted by: EdKlotz
Fabian,
Can you post a barrier iteration log to this thread with the barrier display parameter set to 2? Also, can you display the solution quality after the run stops? And finally, if this is an LP or QP rather than a QCP or SOCP, please try running the simplex method (both primal and dual) on the model just to see what happens. It may shed some light on what is going on.
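In case it helps, here is a minimal sketch of those steps with the CPLEX Python API. It is a sketch only: "model.lp" is a placeholder file name, and get_quality_metrics() plays the role of the interactive optimizer's "display solution quality" (if it is not available in your version of the API, the interactive command shows the same information).

```python
import cplex

# Read the model; "model.lp" is a placeholder for your file.
c = cplex.Cplex("model.lp")

c.parameters.barrier.display.set(2)   # detailed barrier iteration log
c.parameters.advance.set(0)           # solve from scratch on each run

# Run 1: barrier.
c.parameters.lpmethod.set(c.parameters.lpmethod.values.barrier)
c.solve()
print(c.solution.get_quality_metrics())   # solution quality after the run

# Runs 2 and 3: primal simplex, then dual simplex, for comparison.
for method in (c.parameters.lpmethod.values.primal,
               c.parameters.lpmethod.values.dual):
    c.parameters.lpmethod.set(method)
    c.solve()
    print(c.solution.get_status_string())
```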
For LPs, the barrier method uses a convergence criterion based on all of the following measures residing within the barrier convergence tolerance (called eps below):
- Normalized primal feasibility measure: ||b - Ax|| / ||b|| <= eps
- Normalized dual feasibility measure: ||c - A'y - s|| / ||c|| <= eps
- Normalized strong duality measure: |c'x - b'y| / |c'x| <= eps
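To make those measures concrete, here is a small NumPy sketch (not CPLEX code) that computes them for an LP in standard form, min c'x subject to Ax = b, x >= 0, at a candidate primal-dual point; A, b, c, x, y, s are stand-ins for your problem data and solution:

```python
import numpy as np

def convergence_measures(A, b, c, x, y, s):
    """Normalized barrier convergence measures for an LP in standard
    form (min c'x s.t. Ax = b, x >= 0) at a candidate point (x, y, s)."""
    primal = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
    dual   = np.linalg.norm(c - A.T @ y - s) / np.linalg.norm(c)
    gap    = abs(c @ x - b @ y) / abs(c @ x)
    return primal, dual, gap

# Barrier declares optimality once all three fall below eps
# (CPLEX's default barrier convergence tolerance is 1e-8).
eps = 1e-8
```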
If some particular aspect of your data is large relative to the rest of the model, that can prevent convergence. For example, if your model has huge right-hand side values, the barrier algorithm may be unable to drive the normalized primal feasibility measure below the convergence tolerance, because the round-off error in computing the quantities above is itself significantly larger than that tolerance. The measure then just fluctuates above the tolerance as different levels of round-off error creep into the calculations, and CPLEX eventually stops after seeing no progress over a significant number of iterations. In such cases, either rescale the model to reduce the magnitude of the large data values or, if that is not possible, consider a larger convergence tolerance that exceeds any round-off error associated with the data values, as in the sketch below.
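For the second option, a one-liner with the Python API; the value 1e-6 is purely illustrative (the default barrier convergence tolerance, BarEpComp, is 1e-8):

```python
# Continuing with the Cplex object c from the earlier sketch: loosen the
# barrier convergence tolerance so it exceeds the round-off error in the
# residual computations. 1e-6 is purely illustrative; the default is 1e-8.
c.parameters.barrier.convergetol.set(1e-6)
c.solve()
```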
Ed