Originally posted by: TobiasAchterberg
Let us review the theory of dualization for LPs that include bounds on variables. First, let's consider only >= constraints and minimization.
The primal LP can be defined as
min cx
s.t. Ax >= b (y)
x >= l (q)
-x >= -u (r)
Then the associated dual LP is
max by + lq - ur
s.t. yA + q - r = c
y >= 0
q >= 0
r >= 0
Suppose that all bounds l,u are finite. Then, for any dual vector y >= 0 you can just calculate the resulting reduced costs q-r = c - yA. If for variable j we have (c-yA)_j >= 0 then we define q_j := (c-yA)_j and r_j := 0. Otherwise we define q_j := 0 and r_j := -(c-yA)_j. Doing so ensures q >= 0 and r >= 0 and consequently we get a dual feasible solution (y,q,r).
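The splitting rule above is easy to verify numerically. Here is a minimal sketch with small hypothetical data (A, b, c, l, u and the vector y are made up for illustration): we compute the reduced costs d = c - yA, split them into q and r, and check dual feasibility and the dual objective by + lq - ur.

```python
# Sketch with hypothetical data: extend any dual vector y >= 0 to a
# dual feasible solution (y, q, r) when all bounds l, u are finite.
# Primal:  min cx  s.t.  Ax >= b,  l <= x <= u

A = [[1.0, 2.0],
     [3.0, 1.0]]
b = [4.0, 5.0]
c = [2.0, 3.0]
l = [0.0, 0.0]
u = [10.0, 10.0]

y = [0.5, 0.5]          # any y >= 0 works when all bounds are finite

n = len(c)
# reduced costs: d = c - yA
d = [c[j] - sum(y[i] * A[i][j] for i in range(len(A))) for j in range(n)]

# split d into q - r with q >= 0 and r >= 0
q = [max(d[j], 0.0) for j in range(n)]
r = [max(-d[j], 0.0) for j in range(n)]

# dual feasibility: yA + q - r = c
assert all(abs(sum(y[i] * A[i][j] for i in range(len(A)))
               + q[j] - r[j] - c[j]) < 1e-9 for j in range(n))

# dual objective by + lq - ur: a lower bound on the primal optimum
dual_obj = sum(b[i] * y[i] for i in range(len(b))) \
         + sum(l[j] * q[j] for j in range(n)) \
         - sum(u[j] * r[j] for j in range(n))
print(dual_obj)  # -> 4.5
```

For this data the primal optimum is 6.6 (at x = (1.2, 1.4)), so the dual objective 4.5 obtained from the arbitrary choice y = (0.5, 0.5) is indeed a valid lower bound, as weak duality requires.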
If an upper bound u_j is infinite, then the constraint -x_j >= -u_j is absent from the primal, so we must have r_j = 0. Similarly, if a lower bound l_j is minus infinity, then we must have q_j = 0. In these cases it is therefore no longer true that every dual vector y >= 0 can be extended to a dual feasible solution (y,q,r): we additionally need (c - yA)_j >= 0 whenever u_j is infinite, and (c - yA)_j <= 0 whenever l_j is minus infinity.
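A tiny sketch of this restriction, again with hypothetical data: here variable x_2 has u_2 = +infinity, so r_2 must be 0 and a candidate y is extendable only if its reduced cost (c - yA)_2 is nonnegative.

```python
# Sketch with hypothetical data: with an infinite upper bound, not
# every y >= 0 extends to a dual feasible (y, q, r).
import math

A = [[1.0, 2.0]]
c = [2.0, 1.0]
u = [10.0, math.inf]    # x_2 is unbounded above

def extendable(y):
    # reduced costs d = c - yA
    d = [c[j] - sum(y[i] * A[i][j] for i in range(len(A)))
         for j in range(len(c))]
    # r_j = max(-d_j, 0) must be 0 whenever u_j is infinite,
    # i.e. d_j >= 0 is required for those variables
    return all(d[j] >= 0 for j in range(len(c)) if math.isinf(u[j]))

print(extendable([0.4]))  # d_2 = 1 - 0.8 >= 0 -> True
print(extendable([0.6]))  # d_2 = 1 - 1.2 <  0 -> False
```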
If a primal constraint is a <= inequality, then we have y_i <= 0 for the corresponding dual variable. If a primal constraint is an equation, then the corresponding dual variable y_i is free.
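The sign rule for <= rows follows directly from negating the row. A short sketch with a made-up row illustrates this: rewriting a <= row as a >= row flips the sign of its coefficients and right-hand side, so a nonnegative multiplier for the >= form corresponds to a nonpositive multiplier for the original row.

```python
# Sketch with hypothetical data: the row  x_1 + 2 x_2 <= 7  is the
# same as  -x_1 - 2 x_2 >= -7.  If the negated row gets a multiplier
# y' >= 0 in the >= form, the original row's multiplier is y = -y' <= 0
# and contributes identical terms to the dual.
row = [1.0, 2.0]
rhs = 7.0
neg_row = [-a for a in row]
neg_rhs = -rhs

y_prime = 0.3           # multiplier for the >= form, y' >= 0
y = -y_prime            # multiplier for the original <= form

# contribution to yA is identical either way
assert all(abs(y * row[j] - y_prime * neg_row[j]) < 1e-12 for j in range(2))
# and so is the contribution y * b_i to the dual objective
assert abs(y * rhs - y_prime * neg_rhs) < 1e-12
```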
In summary, when you think about the dual solution you always need to consider both the dual row multipliers y and the reduced costs (q,r). You cannot ignore the bounds in the primal, and similarly you cannot ignore the reduced costs in the dual.
#CPLEXOptimizers #DecisionOptimization