Decision Optimization


Delivers prescriptive analytics capabilities and decision intelligence to improve decision-making.


Help following Bender's Decomposition example (bendersatsp.py)


  • 1.  Help following Bender's Decomposition example (bendersatsp.py)

    Posted Thu November 10, 2011 05:44 PM

    Originally posted by: Eumpfenbach


    I posted a question earlier, but please ignore it. I have become more familiar with the example and have made progress implementing the callbacks. I realize there may already be a thread that contains the answer to my question, but some of the existing threads are pretty hard for me to decipher.

    The distributed example has only binary variables in the master problem objective function. I have both continuous and binary variables in the objective of my MILP. I have written code for a Bender's Decomposition where I solve the master fully each time and add a cut until convergence. The continuous variable I introduce into the master problem objective to account for the cost of the continuous variables is giving me trouble.

    Specifically, the problem is maximization. I give the singular continuous variable an objective coefficient of -1. I give it a lower bound that is significantly lower than its optimal value (say -1 million). When I solve my problem, the results indicate it is solving as if it is ignoring the continuous variables, maximizing only on the binaries, and setting this continuous variable to its lower bound. Thus, I think my root node is finding an improper solution that is destroying the search.

    Any suggestions on what to do? I'm thinking of generating one Bender's cut outside of the main solve, manually adding it to the problem before the solve starts, and then proceeding with the callbacks. Would that help? Is there a better way to do this?
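    For reference, a pre-computed cut can be appended to the master as an ordinary linear constraint before calling solve. A minimal sketch, where cut_ind, cut_val and cut_rhs are placeholders for whatever cut you compute outside the solve:

    import cplex

    # Illustration only: add one pre-computed Benders cut to the master
    # before starting branch-and-cut. cut_ind, cut_val, cut_rhs stand in
    # for a cut computed outside the solve.
    def add_initial_cut(master, cut_ind, cut_val, cut_rhs):
        master.linear_constraints.add(
            lin_expr=[cplex.SparsePair(ind=cut_ind, val=cut_val)],
            senses=["L"],
            rhs=[cut_rhs],
            names=["initial_benders_cut"])

    # master.solve() would then start from a model that already contains the cut.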
    #CPLEXOptimizers
    #DecisionOptimization


  • 2.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Thu November 10, 2011 06:19 PM

    Originally posted by: Eumpfenbach


    Actually after trying and thinking about what I described above, this doesn't work.

    For example, if I have y1, y2, y3, and a continuous variable x1 representing the solution to the dual problem, and I set my initial choice of y to be (1,0,0) and add a point cut, this does nothing to prevent the solver from choosing y2 and setting x1 to its lower bound, creating the same problem. I'm kind of lost on what to do.
    #CPLEXOptimizers
    #DecisionOptimization


  • 3.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Fri November 11, 2011 05:35 PM

    Originally posted by: SystemAdmin


    I'm not sure I'm following this. Did you add a continuous variable to the supplied example? If so, did you add one or more constraints tying it to the binary variables? If not, it will automatically take its lower bound in any solution.

    Paul

    Mathematicians are like Frenchmen: whenever you say something to them, they translate it into their own language, and at once it is something entirely different. (Goethe)
    #CPLEXOptimizers
    #DecisionOptimization


  • 4.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Fri November 11, 2011 06:44 PM

    Originally posted by: Eumpfenbach


    Correct, it is taking its lower bound. It is hard for me to describe. Maybe this will help.

    Please look at trnloc1d.mod at this website

    http://www.ampl.com/NEW/LOOP2/trnloc1d.mod

    The variable I am talking about in this example is "Max_Ship_Cost". It is created to represent the dual objective function in the master objective function (as well as for adding point cuts). My model is similar (in that it also has continuous and binary terms in the master objective), so if you could describe how to constrain it in terms of that model, then I'm sure I could adapt it.

    If solved the AMPL way, in the first iteration the objective value is very low (below the true optimum of the base MIP; it's a minimization problem), then after each cut is added the model moves closer to feasibility but the objective gets larger. I don't want to solve my master problem multiple times as they do, though. I only want to solve it once. When solving the master once, I don't have the ability to allow an objective lower than what is really possible and tighten it on each iteration, since this destroys the branching.
    #CPLEXOptimizers
    #DecisionOptimization


  • 5.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Wed November 16, 2011 04:32 PM

    Originally posted by: Eumpfenbach


    I am considering trying a scheme where I randomly generate say 50 subproblems (randomly for now, this could probably be improved), generate the cuts, solve the master, and then see if I find the optimal solution (by taking the solution of the master, solving the base MILP with the binary variables fixed to the optimal solution of the master, and seeing if the objective functions are equal).

    Basically, this would be a compromise between solving the master every time and solving it only once. The negative is that solving the master each time gives us a way of choosing the set of binary variables to solve the slave with. This method wouldn't have that "guidance", since I am trying to avoid solving the master every time. I'm sure I could come up with a better method for choosing sets of binary variables, but to start, I would try random.

    Just from experience with previous problems, do you think this strategy is likely to perform well? Is it possible it could find the true optimum with fewer cuts than if I solved the master every time (if I got really lucky)? Is this a commonly used method?

    Your opinion is greatly appreciated.
    #CPLEXOptimizers
    #DecisionOptimization


  • 6.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Thu November 17, 2011 11:46 AM

    Originally posted by: SystemAdmin


    I haven't seen it before (so not commonly used), and it's anybody's guess how it would do. The usual way to avoid solving the master each time is to code the model in a programming language (C++, Java, Python) and use callbacks. Using AMPL, you could try setting a solver option to stop the solution of the master as soon as an integer feasible solution is found, generate a cut from that solution, and go back to the master.
    Mathematicians are like Frenchmen: whenever you say something to them, they translate it into their own language, and at once it is something entirely different. (Goethe)
    #CPLEXOptimizers
    #DecisionOptimization


  • 7.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Thu November 17, 2011 02:07 PM

    Originally posted by: Eumpfenbach


    I want to use callbacks in python. My problem is already coded in python. I was only using the AMPL example as a way of helping me describe my problem (since they give an example that has both continuous and binary variables in the objective function).

    I understand callbacks, the ILOG example of Bender's decomposition, and the AMPL example of Bender's decomposition. What I am still unclear on is the best use of callbacks to do a Bender's decomposition with continuous variables in the objective function. I cannot formulate the AMPL way (with a single continuous variable whose bound is progressively tightened), since this variable destroys the branching process. I could formulate the entire problem in the master (both the binary and continuous variables) and then try to add cuts and hope for a speed improvement. I thought that a lot of the benefit of Bender's decomposition came from splitting the problem into an ILP and an LP, instead of a MILP. I'm skeptical of whether I could compete with CPLEX this way.

    If you were in my situation and you had two options in front of you: (1) generate a batch of cuts, solve the master, and see if they were enough, or (2) solve the full MILP and add cuts using callbacks.

    Which one would you bet on? (I am fully aware that your opinion comes with no guarantees of any results; I am just going to trust your opinion.) Or feel free to suggest something else if I am overlooking something.
    #CPLEXOptimizers
    #DecisionOptimization


  • 8.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Thu November 17, 2011 05:37 PM

    Originally posted by: SystemAdmin


    > Eumpfenbach wrote:
    >
    >I cannot formulate the AMPL way (with a single continuous variable whose bound is progressively tightened), since this variable destroys the branching process.
    >
    You lost me there. The continuous variable in the master makes it a MILP rather than a pure ILP, but it does not cause any problems with branching. Your lazy constraint callback has to know how to generate both ray cuts (the master solution makes the subproblem infeasible) and point cuts (the subproblem is feasible but the continuous master variable underestimates the subproblem cost).
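    In the Python API that dispatch might look roughly like the sketch below. This is not code from the example; make_point_cut and make_ray_cut are hypothetical helpers that turn a dual vertex or ray into master-cut coefficients, and the tolerance is an assumption.

    # Sketch of the two cases inside a lazy constraint callback, assuming a
    # minimization master: solve the dual subproblem for the current master
    # solution and decide which kind of Benders cut, if any, is violated.
    def separate_sketch(dual, eta_star, make_point_cut, make_ray_cut):
        dual.solve()
        status = dual.solution.get_status()
        if status == dual.solution.status.unbounded:
            # master solution makes the primal subproblem infeasible -> ray cut
            ray = dual.solution.advanced.get_ray()
            return make_ray_cut(ray)
        # dual is bounded: check whether eta underestimates the subproblem cost
        z_star = dual.solution.get_objective_value()
        if z_star > eta_star + 1e-6:
            return make_point_cut(dual.solution.get_values())  # point cut
        return None  # current master solution needs no further cut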

    Paul

    Mathematicians are like Frenchmen: whenever you say something to them, they translate it into their own language, and at once it is something entirely different. (Goethe)
    #CPLEXOptimizers
    #DecisionOptimization


  • 9.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Thu November 17, 2011 10:46 PM

    Originally posted by: Eumpfenbach


    I am sorry if I wasn't clear. You are correct that the one continuous variable makes it an MILP.

    So if I solve the AMPL way, let's say the objective value of the master problem starts at 10, I add a cut, resolve, and it reduces to 9, ..., and it converges to the optimal minimum of 3.

    Now let's say I set up the same problem in Python/CPLEX with callbacks. At the root node it finds the solution of 10 (my LP relaxation tends to find integral solutions, but not always, so I can't count on it). Then it branches, finds that solutions lower in the search tree cannot beat a value of 10, and stops. However, this is not the true minimum of the non-decomposed problem. Thus, it converges before I have an opportunity to add cuts.
    #CPLEXOptimizers
    #DecisionOptimization


  • 10.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Sat November 19, 2011 05:12 PM

    Originally posted by: SystemAdmin


    You have a LazyConstraintCallback in the Python version, right? At the root node, the solution with 10 should cause the callback to be invoked. The callback should add a point cut (still at the root node) cutting off that solution, etc., ad nauseam.
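    For reference, the wiring follows the pattern in bendersatsp.py; a minimal sketch, where x and workerLP stand in for the corresponding objects in your own model, and workerLP.separate is assumed to store cutLhs/cutRhs when it finds a violated cut:

    import cplex
    from cplex.callbacks import LazyConstraintCallback

    class BendersLazyConsCallback(LazyConstraintCallback):
        def __call__(self):
            # called whenever CPLEX finds an integer-feasible master solution,
            # including at the root node
            sol = self.get_values(self.x)
            if self.workerLP.separate(sol, self.x):
                self.add(constraint=self.workerLP.cutLhs,
                         sense="L",
                         rhs=self.workerLP.cutRhs)

    # registration, assuming 'master', 'workerLP' and 'x' already exist:
    # lazy = master.register_callback(BendersLazyConsCallback)
    # lazy.x = x
    # lazy.workerLP = workerLP
    # master.solve()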

    Paul

    Mathematicians are like Frenchmen: whenever you say something to them, they translate it into their own language, and at once it is something entirely different. (Goethe)
    #CPLEXOptimizers
    #DecisionOptimization


  • 11.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Sun November 20, 2011 09:25 PM

    Originally posted by: Eumpfenbach


    You would think that would be the case. I can't tell why this is happening (otherwise I wouldn't be on here asking for help), but I can describe the output I am seeing. I print a message every time the lazy constraint callback is called, and I see this message, but the cuts are not properly removing these solutions. It is converging with the continuous variable at its lower bound, which is not feasible (the lower bound is set arbitrarily low to make sure it doesn't cut off any feasible solutions).

    I am attaching my file in case anyone feels inclined to take a look at it. To help with understanding the file (because it is fairly long), I will give an overview. First I input my data. Then I define several functions (from buildmatrices to buildmatricesmaster) which I use to generate cuts and matrices used in the solving based on the results of the slave and master problems. Then I define the classes used in the benders decomposition.

    If there is an error, it is very likely in the classes (from BendersLazyConsCallback down), since I have tested the functions above with a Bender's decomposition where I solve the master to optimality every time, and it produced the correct solution. This narrows it down to about 150 lines where an error could be (the file is 1500 lines). I am very confident that I am creating the correct cuts (although I wouldn't bet my life on it), since my other attempt converged to the right solution.

    I also have a question about the callback procedure. So let's say the first node it solves finds a solution of 10 (before the cut), but the true solution should have been 5 (with the extra 5 being the difference between the true dual objective value and its bound). Then I add a Bender's point cut. If I resolved the node with the cut added, it would find an objective value of 5. Does CPLEX resolve the node, though? It could be the optimal solution, but not until the continuous variable representing the slave objective function has been properly constrained. Thus, it would require the node to be solved twice, I believe. I guess part of my problem is that I was taught branch-and-bound, but never branch-and-cut. I'm not comfortable with it.

    I try to say this in every message -- Thanks a lot.
    #CPLEXOptimizers
    #DecisionOptimization


  • 12.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Mon November 21, 2011 09:04 AM

    Originally posted by: SystemAdmin


    Below is a quick, basic explanation of Benders' decomposition. I would suggest you check whether your decomposition and your implementation are correct from a theoretical point of view.

    Hope this can help you.

    Suppose this is the MILP you want to solve:

    min c x + d y
    A x >= b
    T x + Q y >= q
    x >= 0, integer
    y >= 0, continuous

    You decompose the problem and you get:

    Master:

    min c x + eta
    A x >= b
    x >= 0, integer
    eta >= p (q - Tx), for all p vertices of the dual subproblem
    r (q - Tx) <= 0, for all r rays of the dual subproblem

    Here the variables are x and eta.

    eta is a continuous variable that takes into account the objective function d y. Eventually, if you know a valid lower bound on eta (e.g. eta >= 0), you can add it to the master problem.

    'eta >= p (q - Tx) and r (q - Tx) <= 0' are the Benders' cuts.

    Those cuts are exponentially many, so you need to solve the master with cut callbacks. Your callbacks will separate those cuts by solving a 'dual subproblem' (see later). You may decide if you want to separate Benders' cuts only as lazy constraints or even as user cuts (see the example bendersatsp.py).

    More precisely, 'eta >= p (q - Tx)' are your 'point cuts' (p are the points you will find solving the dual subproblem), and 'r (q - Tx) <= 0' are your 'ray cuts' (r are the rays you will find solving the dual subproblem).

    Initially, the list of available cuts is typically empty, so the initial master will be just

    min c x + eta
    A x >= b
    x >= 0, integer
    plus a bound on eta if you know a valid one.
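    A sketch of building that initial master in the CPLEX Python API (minimization, with an assumed lower bound of 0 on eta; c, A_rows and b are placeholders for your data):

    import cplex

    def build_initial_master(c, A_rows, b):
        # Sketch: min c x + eta,  A x >= b,  x >= 0 integer,  eta >= 0 (assumed bound)
        m = cplex.Cplex()
        m.objective.set_sense(m.objective.sense.minimize)
        n = len(c)
        # integer x variables
        m.variables.add(obj=list(c), lb=[0.0] * n, types=["I"] * n,
                        names=["x%d" % j for j in range(n)])
        # continuous surrogate for the subproblem objective d y
        m.variables.add(obj=[1.0], lb=[0.0], names=["eta"])
        # A x >= b, where A_rows is a list of (indices, values) pairs
        m.linear_constraints.add(
            lin_expr=[cplex.SparsePair(ind=list(ind), val=list(val))
                      for ind, val in A_rows],
            senses=["G"] * len(b),
            rhs=list(b))
        return m
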
    Dual subproblem (to be solved from within cut callbacks)

    max u (q - T x*)
    u Q <= d
    u >= 0

    Here the variables are u, while x* is the solution you obtain by solving the master.

    Note that this is just the dual of the part of the MILP that you have 'removed':

    min d y
    Q y >= (q - T x)
    y >= 0
    where x is fixed (x == x*)

    Within the branch-and-cut, you need to solve the dual subproblem by using cut callbacks to see if the current solution of the master, (x*, eta*), is feasible and optimal for your problem.
    Again, you can decide to use only the lazy constraint callback, if you want to separate Benders' cuts only to cut off integer x solutions, or you can use both the lazy constraint callback and the user cut callback if you want to separate Benders' cuts to cut off both integer and fractional solutions.

    Whenever you solve the dual subproblem, you will have two possibilities:

    1. The dual subproblem is bounded, the optimal solution is a vertex u* and the optimal solution value is z* = u* (q - T x*).
    If z* <= eta* then STOP (x*, eta*) is feasible and optimal for the master.
    Otherwise (z* > eta*), you have found a violated 'point cut' eta >= p (q - Tx), where p == u*.
    You can add this cut to the master from within your cut callback.

    2. The dual subproblem is unbounded. In such a case you can get the unbounded ray u* of the dual, that corresponds to a violated 'ray cut' r (q - Tx) <= 0, where r == u*.
    As before you can add this cut to the master from within your cut callback.
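    As a sketch, turning the dual point or ray u into the actual master cut amounts to computing u q and u T. Dense q and T are assumed here just to keep the illustration short, and eta is taken to be the variable with index num_x:

    import cplex

    def benders_cut_from_dual(u, q, T, num_x, is_ray):
        # Point cut:  eta >= u (q - T x)   <=>   (u T) x + eta >= u q
        # Ray cut:    0   >= u (q - T x)   <=>   (u T) x       >= u q
        # q is a list of length m, T a dense m x num_x matrix (list of rows).
        m_rows = len(q)
        uq = sum(u[i] * q[i] for i in range(m_rows))
        uT = [sum(u[i] * T[i][j] for i in range(m_rows)) for j in range(num_x)]
        ind = list(range(num_x))    # indices of the x variables in the master
        val = list(uT)
        if not is_ray:
            ind.append(num_x)       # index of eta (assumed to be the last variable)
            val.append(1.0)
        return cplex.SparsePair(ind=ind, val=val), "G", uq

    The returned triple can then be handed to the lazy constraint callback's add() as constraint, sense and rhs.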
    #CPLEXOptimizers
    #DecisionOptimization


  • 13.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Mon November 21, 2011 10:12 AM

    Originally posted by: Eumpfenbach


    Thanks for your post. What you have described above is the approach that I have taken. If I solve using a loop in python or matlab, I find the correct solution (where I alternate between solving the master and slave, solve the master problem to optimality every time, add a cut, and repeat until it converges).

    I am hesitant to speak in absolutes when I am not sure what I am doing wrong, but I am pretty certain my problem is not with the theoretical aspects of Bender's Decomposition (given that I can solve the problem without callbacks slowly but correctly). This is my first time experimenting with the callbacks, I have no training with them outside of the manuals, and neither has my advisor. I am pretty sure my problem is that I am not using them correctly. I tried to learn and copy the example as closely as I could, but something is not right and I am not sure what.
    #CPLEXOptimizers
    #DecisionOptimization


  • 14.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Mon November 21, 2011 11:46 AM

    Originally posted by: SystemAdmin


    If you have an implementation with no callbacks that works (a loop that calls the master MIP and the worker LP separately), you can use it to 'debug' your callback implementation to figure out what's wrong.

    A few suggestions:

    • select the smallest possible instance of your problem where your callback implementation does not work as expected.

    • use only the lazy constraint callback (in this way you have to deal only with integer solutions of the master and everything should be easier to understand).

    When the lazy constraint callback is called, you can

    • retrieve and print the integer solution (x*, eta*) that you will try to cut off.

    • build and print the worker LP for this solution

    • solve the worker LP and print the solution (and the solution status) you obtain.

    • print the cut (if any) that you are constructing from the worker LP solution

    • ...

    In this way you can check what you are obtaining from your 'callback implementation' with respect to what you obtain from your correct 'no-callback implementation'. And maybe you can figure out what's going wrong with your callback implementation.
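    As a concrete (though hypothetical) sketch, those prints could sit at the top of separate(), using only information the callback already has:

    def debug_separation(sol, eta_star, dual):
        # print the master solution the callback is trying to cut off
        print("master solution: " + str(sol))
        print("eta* = " + str(eta_star))
        # dump the worker LP so it can also be inspected in the interactive
        dual.write("worker_debug.lp")
        dual.solve()
        status = dual.solution.get_status()
        print("worker status = " + str(status) +
              " (" + dual.solution.get_status_string() + ")")
        if status != dual.solution.status.unbounded:
            print("worker objective = " + str(dual.solution.get_objective_value()))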
    #CPLEXOptimizers
    #DecisionOptimization


  • 15.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Mon November 21, 2011 12:11 PM

    Originally posted by: Eumpfenbach


    Good thought -- Print the cut from both cases and make sure they are generating the same one. Will post the results of this.
    #CPLEXOptimizers
    #DecisionOptimization


  • 16.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Mon November 21, 2011 08:52 PM

    Originally posted by: Eumpfenbach


    Making some progress. I found an error that was causing my subproblem to not update properly with the new binary variables. I am now seeing cplex search outside of the root node.

    Now I am seeing a new problem, however. I constrain my master problem so that I never have an infeasible/unbounded subproblem. Later I will need to relax this, but this is just to start small and work my way up. So without going into specifics of my problem, say I have a constraint that Sum(y) = 2, where y is a vector of binary variables. CPLEX searches, generates cuts, finds the correct optimal solution as an incumbent (the objective I get when I solve without decomposition), but then after a while generates a y where Sum(y) = 1. This is not infeasibility caused by the decomposition process. It is infeasible because CPLEX is ignoring a constraint of the master. It only happens 1% of the time or so.

    When this happens, CPLEX then starts finding solutions that have higher objective values than the true optimal solution, which it eventually calls the solution. Is this normal? I know I will eventually need to bring in ray cuts when the subproblem is infeasible, but I shouldn't have to worry about an infeasible master, right?
    #CPLEXOptimizers
    #DecisionOptimization


  • 17.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Mon November 21, 2011 10:09 PM

    Originally posted by: Eumpfenbach


    I would also add that when what I described above happens, I print the results of get_ray() and it is all zeros.
    #CPLEXOptimizers
    #DecisionOptimization


  • 18.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue November 22, 2011 10:39 AM

    Originally posted by: SystemAdmin


    I did not follow the discussion, so please be patient with me ;)

    Your master problem is a MIP, right? And you claim that when you solve this MIP at some point during your BD algorithm, the solution that it finds violates one of the constraints?
    Can you write out the MIP as a .sav file and the parameters that you are using as a .prm file and try to reproduce the behavior in the interactive?
    Tobias
    #CPLEXOptimizers
    #DecisionOptimization


  • 19.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue November 22, 2011 01:04 PM

    Originally posted by: Eumpfenbach


    Yes, that is what I am saying. It is finding a solution that violates one of the constraints.

    Forgive me for asking what might be a very elementary question, but by "reproducing the behavior in the interactive" do you mean formulating it in the CPLEX Studio IDE and generating the save files? My approach so far has been to formulate in AMPL (that's just the modeling language I learned first) and to solve with callbacks through the Python IDE. Formulating in the IDE would take a long time, and in addition, you can't use the callbacks there, right? I need the callbacks to reproduce the problem.

    I will attach my files, and try to facilitate debugging as much as I can. I hope this is enough.

    main_callbacks_v4.py is my main file. master_nov.mps creates my master problem. slave2.mps creates the slave.

    You will see that around line 1500 I add some additional constraints to the master model that solve a reduced master problem, but guarantee that every solution satisfying these constraints will have a bounded dual subproblem. The only ones that are important are the first 4 constraints. They add the constraints mvar0 + mvar2 + mvar4 + ... + mvar30 = 2 and mvar1 + mvar3 + ... + mvar31 = 2 (here I am using mvar# to represent the #th variable of the master problem). I use the function populate_y_variables() to take the solution of the master problem and create an array that looks something like this:

    y1 =
    [0,0
    0,0
    .
    .
    .
    1,1
    1,1]

    Essentially, the minimum you need to know is that the sum of each column of y1 should equal two. This is the constraint being violated.

    I run the file (using the command "python main_callbacks_v4.py > out.txt" so that it dumps the output to a file and makes everything readable). It has some lines to print y1 each time it solves the dual problem. Most of the time, the columns sum to two. Occasionally, it is finding some that sum to 1.

    The optimal solution is 808027, which it finds for a period of time as the incumbent. It finds the first of these infeasible solutions without destroying the problem (output file line 864), but the second one causes it to reach a solution it never should reach (output file line 2105, superoptimal for the non-decomposed problem). In the last couple of lines of the output I print the "optimal" y1, which has the columns summing to 2. However, "last" (which is a continuous variable representing the objective contribution of the subproblem) has a true value of -868,000, when its true lower bound should be around -400,000 (the problem is a maximization with this variable having an objective coefficient of -1).

    I am using Python 2.6 and CPLEX 12.3.

    I hope this is enough to recreate the problem. If not, I can create more files or whatever. I just need a little more instruction.

    Thanks!
    #CPLEXOptimizers
    #DecisionOptimization


  • 20.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue November 22, 2011 01:04 PM

    Originally posted by: Eumpfenbach


    master mps file
    #CPLEXOptimizers
    #DecisionOptimization


  • 21.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue November 22, 2011 01:05 PM

    Originally posted by: Eumpfenbach


    slave mps file
    #CPLEXOptimizers
    #DecisionOptimization


  • 22.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue November 22, 2011 04:25 PM

    Originally posted by: SystemAdmin


    It seems that your subproblem slave2.mps is unbounded. Might this be the issue? Are you checking the solution status of your subproblem solves before querying the solution? Maybe you are just using the unbounded ray or the last basic solution before unboundedness was detected without noticing it.

    Tobias
    #CPLEXOptimizers
    #DecisionOptimization


  • 23.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue November 22, 2011 05:16 PM

    Originally posted by: Eumpfenbach


    This is the code I use:

    
    def separate(self, sol, x):
        tuple = populate_y_variables(sol)
        y1, y3, y10, y11, y28, y40, y41, y42, y43, y44, y45, y46, y47, y48, y49, y50, y51, y52, y53, y54, y55, y56, y57, y58, y59, y60, y61, y62, y66 = tuple
        dual = self.dual
        violatedCutFound = False
        tuple = buildmatrices(y1, y3, y10, y11, y28, y40, y41, y42, y43, y44, y45, y46, y47, y48, y49, y50, y51, y52, y53, y54, y55, y56, y57, y58, y59, y60, y61, y62, y66)
        dualf, dualB = tuple
        dual.objective.set_linear(zip(range(0, dual.variables.get_num()), dualf))
        dual.linear_constraints.set_rhs(zip(range(0, dual.linear_constraints.get_num()), dualB))
        dual.solve()
        print "status = " + str(dual.solution.get_status())

        if dual.solution.get_status() == dual.solution.status.unbounded:
            print "xxxxxxxxxxxxxx   unbounded subproblem xxxxxxxxxxxxxxxxxx"
            print "y1 = " + str(y1)
            ray = dual.solution.advanced.get_ray()
            print str(ray)
            cut_type = 0
            tuple = populatedualvariables2(ray)
            dcon10, dcon11, dcon18, dcon19, dcon20, dcon21, dcon22, dcon92, dcon94, dcon95, dcon96, dcon97, dcon98, dcon99, dcon100, dcon101, dcon102, dcon103, dcon104, dcon105, dcon106, dcon107 = tuple
            tuple = buildmatricesmaster(dcon10, dcon11, dcon18, dcon19, dcon20, dcon21, dcon22, dcon92, dcon94, dcon95, dcon96, dcon97, dcon98, dcon99, dcon100, dcon101, dcon102, dcon103, dcon104, dcon105, dcon106, dcon107, cut_type)
            cut_array, value = tuple
            cutLhs = cplex.SparsePair(ind=range(0, 649), val=cut_array)
            cutRhs = value
            self.cutLhs = cutLhs
            self.cutRhs = cutRhs
            violatedCutFound = True
        else:
            print "y1 = " + str(y1)
            tuple = populatedualvariables(dual)
            cut_type = 1
            dcon10, dcon11, dcon18, dcon19, dcon20, dcon21, dcon22, dcon92, dcon94, dcon95, dcon96, dcon97, dcon98, dcon99, dcon100, dcon101, dcon102, dcon103, dcon104, dcon105, dcon106, dcon107 = tuple
            tuple = buildmatricesmaster(dcon10, dcon11, dcon18, dcon19, dcon20, dcon21, dcon22, dcon92, dcon94, dcon95, dcon96, dcon97, dcon98, dcon99, dcon100, dcon101, dcon102, dcon103, dcon104, dcon105, dcon106, dcon107, cut_type)
            cut_array, value = tuple
            cutLhs = cplex.SparsePair(ind=range(0, 649), val=cut_array)
            cutRhs = value
            self.cutLhs = cutLhs
            self.cutRhs = cutRhs
            violatedCutFound = True

        return violatedCutFound
    


    So yes, I am checking the solution status of the subproblem (the if/else statement above). The previous code that I had attached was not designed to deal with infeasible subproblems, because I constrained the master so that I would never have an infeasible subproblem. The new one that I attached has code to deal with the possibility of an infeasible subproblem. It now converged to the proper solution. There is still something goofy going on that I think I need to get to the bottom of to have confidence everything is right.

    In plain terms, this is what I am seeing:

    1) I add constraints to guarantee that only feasible solutions are considered.
    2) 99/100 times, the constraints are satisfied. 1/100 times I see that they aren't.
    3) Only in these cases do I find an unbounded subproblem.
    4) If I add "ray" cuts as usual, these infeasible choices of binary variables are removed and the optimal solution is found. Still, it is troubling to see CPLEX ignoring constraints in my master problem.

    I will triple check that these solutions that I am calling infeasible, really are infeasible. I am pretty confident that the mistake is not in my modeling of the master problem, though.
    #CPLEXOptimizers
    #DecisionOptimization


  • 24.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue November 22, 2011 06:49 PM

    Originally posted by: Eumpfenbach


    Ooookay so the master is solving correctly. I solve the master, take the solution vector, and then split it into variables (y1, y2, etc...) that I then use to generate the objective and rhs vectors for the dual subproblem. For some strange reason, this is not being done correctly (translating the vector into the smaller arrays for use). I am 99.9999% sure my code for this function is right. What could cause this to happen? Could it be some kind of hardware / memory error?
    #CPLEXOptimizers
    #DecisionOptimization


  • 25.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Wed November 23, 2011 04:20 AM

    Originally posted by: SystemAdmin


    So, you are saying that the master problem is solved correctly by CPLEX, but some error gets introduced when you map the CPLEX solution vector to your (smaller sized) y array. Is this correct?

    Can you post the piece of code that does this querying of the CPLEX solution vector and the assignment to the y array?

    Are you aware of the fact that a MIP solution is not necessarily strictly integral? It can happen that the solution values for a binary variable are only within 1e-5 (the default integrality tolerance) of 0 or 1, so if you are doing something like

    if ( cpxsol[j] == 1 ) y[j] = 1;
    


    you will get a wrong result. You need to check for

    if ( cpxsol[j] >= 0.5 ) y[j] = 1;
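    In Python terms, one way to guard against this is to round the whole master solution vector to exact 0/1 values before slicing it up. A sketch, where sol is the list returned by get_values for the binary master variables:

    # Sketch: an "integer" CPLEX solution may contain values such as
    # 0.9999999999 or 1e-16, so snap them to exact 0.0/1.0 before use.
    def round_binary_solution(sol):
        return [1.0 if v >= 0.5 else 0.0 for v in sol]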
    

    Tobias
    #CPLEXOptimizers
    #DecisionOptimization


  • 26.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Wed November 23, 2011 10:24 AM

    Originally posted by: Eumpfenbach


    Yes, that is what I am saying. I create a set of global variables to mark where each smaller variable starts in the master solution vector (abbreviated here because you don't need to see all the variables; just a sample should be fine; y1 is one that is giving me an error, but not the only one):

    
    y1_start = 0
    y3_start = 32
    y10_start = 46
    ...
    


    I have a class for the lazy constraint callback:

    
    
    
    class BendersLazyConsCallback(LazyConstraintCallback):
        def __call__(self):
            print "xxxxxxxx Lazy Cut xxxxxxxxxxxxxxx"
            x        = self.x
            workerLP = self.workerLP
            numNodes = len(x)
            sol = []
            for i in range(numNodes):
                sol.append([])
                sol[i] = self.get_values(x[i])
            if workerLP.separate(sol, x):
                self.add(constraint=workerLP.cutLhs, sense="L", rhs=workerLP.cutRhs)
    


    My WorkerLP class (contents of __init__ omitted because they aren't needed):

    
    
    
    class WorkerLP:
        def __init__(self):
            ....

        def separate(self, sol, x):
            tuple = populate_y_variables(sol)
            y1, y3, y10, y11, y28, y40, y41, y42, y43, y44, y45, y46, y47, y48, y49, y50, y51, y52, y53, y54, y55, y56, y57, y58, y59, y60, y61, y62, y66 = tuple
    


    And the function populate_y_variables:

    
    def populate_y_variables(sol):
        temp_master = sol
        marker = y1_start
        for i in configs:
            for j in demand_regions:
                y1[i,j] = temp_master[marker]
                marker = marker + 1
        y3[0] = temp_master[32]
        y3[1] = temp_master[33]
        y3[2] = temp_master[34]
        y3[3] = temp_master[35]
        y3[4] = temp_master[36]
        y3[5] = temp_master[37]
        y3[6] = temp_master[38]
        y3[7] = temp_master[39]
        y3[8] = temp_master[40]
        y3[9] = temp_master[41]
        y3[10] = temp_master[42]
        y3[11] = temp_master[43]
        y3[12] = temp_master[44]
        y3[13] = temp_master[45]
        y10 = temp_master[46]
        y11 = temp_master[47]
        marker = y28_start
        for i in components:
            for j in component_suppliers:
                for k in break_points:
                    if k == 0:
                        y28[j,0,i] = temp_master[marker]
                    else:
                        y28[j,1,i] = temp_master[marker]
                    marker = marker + 1
        ...
    

    #CPLEXOptimizers
    #DecisionOptimization


  • 27.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Mon November 28, 2011 11:38 AM

    Originally posted by: Eumpfenbach


    I realize you guys are probably busy catching up from the holiday weekend, but I'm still stuck with this error. I don't think it's my code, because I have verified that it functions correctly 99% of the time (by printing the solution of the master, breaking it up into the smaller variables in Excel, and verifying they match what was printed within the output from Python).

    For some reason, though, it is not functioning the other 1% of the time. I can't really make any progress with my problem until I figure this out.

    Thanks.
    #CPLEXOptimizers
    #DecisionOptimization


  • 28.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue November 29, 2011 07:41 AM

    Originally posted by: SystemAdmin


    Hi.

    I'm not sure I understand your problem.
    So please allow me to try to recap and tell me if/where I'm wrong.

    You have a complete formulation F with integer and continuous variables.
    You decompose it and you get
    • a master M with integer variables only (plus a continuous variable 'eta' that takes into account the objective function of the continuous variables that you have removed).
    • a worker LP (dual subproblem D) that is a pure linear program.

    You solve the problem with Benders' cuts added with the Lazyconstraint callback.

    Whenever the lazyconstraint callback is called:
    • you get the optimal solution of the current master (say 'sol'), which is always integer (integer according to CPLEX tolerances) because you are only using the lazy constraint callback.
    • you build the dual subproblem D: the constraints and the rhs of D are fixed, only the objective function of D depends on 'sol', right?
    • you solve D to look for a violated cut and (if you find a cut) your lazyconstraint callback adds the cut to the current master.

    You say that there is a case when
    • the current solution 'sol' is not a feasible solution for your problem (i.e., for the formulation F)
    • but the dual subproblem is not finding any violated cut
    Therefore, at the end the answer is wrong because the optimal solution you get is not feasible.

    And you are sure that the problem is because the dual D is not constructed correctly:
    Some error gets introduced when you map the CPLEX solution vector of the master (i.e., 'sol') to your (smaller sized) y array.

    If I understand, the y array you build from 'sol' should only affect the objective function of D.
    Therefore your problem can be summarized as follows:
    Because you know 'sol' is infeasible, you expect D to be unbounded (so that you can get a 'ray' cut), but it's instead bounded and you don't find the cut.

    Is everything correct, or am I not understanding your problem?
    #CPLEXOptimizers
    #DecisionOptimization


  • 29.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue November 29, 2011 10:40 AM

    Originally posted by: Eumpfenbach


    Pretty close, but not exactly.

    I solve the master, and I always get a feasible solution to the complete problem (I am adding constraints I really shouldn't know so that I don't have to deal with infeasibility at the moment). There is no problem with the solution to the master.

    Then I take the master solution and break it up into smaller variables, which make it easier to formulate the dual RHS and objective function, which depend on the solution to the master. However, something is happening where the master solution is not being translated into the smaller variables correctly.

    So, for example, if sol[0] = 1 (the first variable in the master problem), then y1[0,0] should equal 1. 99% of the time it makes this translation correctly. 1 time out of 100 I see sol[0] = 1 and y1[0,0] = 0.

    The problem is not that it is finding subproblems that should be unbounded as bounded; it is finding problems that should be bounded as unbounded (because of the translation error). I wouldn't bet my life that there is not an error in my formulation of the dual RHS and objective function, but I am pretty confident they are correct. Even if there were an error there, it wouldn't explain the error I mentioned above, which I have carefully checked through "print" statements. My thinking was to try and fix the error that I am sure is there before I search for other possible errors.

    When I track my objective function, it sometimes finds the correct solution, hits one of these strange unbounded problems, and then comes up with a solution that is superoptimal to the original problem. Obviously, it should add a ray cut that removes this choice of binary variables from consideration, but it isn't happening. I can't explain why not, but I am trying to just attack the error I have witnessed before searching for others. Maybe they are separate problems, maybe they have the same root cause.

    I have attached my most recent formulation if you want to witness the output (when run with the above .mps files). The optimal solution should be 808,000 (I can't remember all the smaller digits) but it is finding an optimal solution of 865631. You might want to turn on and off some of the print statements (if you decide to run it). The solution of the master problem when broken up is [y1[0,0], y1[0,1], y1[1,0], y1[1,1], ..., y1[15,0], y1[15,1]], ... other variables. You will notice that y1 does not match up with the master solutions in this way, though, when the unbounded subproblem error occurs.

    Thanks,
    #CPLEXOptimizers
    #DecisionOptimization


  • 30.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue November 29, 2011 10:43 AM

    Originally posted by: Eumpfenbach


    I guess this board wants to turn a bunch of my statements with brackets into html links. I think it is still readable. Just put brackets around those numbers (since that is how python handles arrays).
    #CPLEXOptimizers
    #DecisionOptimization


  • 31.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Wed November 30, 2011 10:29 AM

    Originally posted by: SystemAdmin


    So basically you are saying that something wrong happens in the function populate_y_variables(sol), right?

    This is the (first part of the) function:
    def populate_y_variables(sol):
        temp_master = sol
        marker = y1_start
        for i in configs:
            for j in demand_regions:
                y1[i,j] = temp_master[marker]
                marker = marker +1
        ...
    


    And you say that, when you get the unbounded dual subproblem (i.e., the wrong translation that generates the wrong unbounded dual subproblem), you observe that
    y1_start == 0
    configs == [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
    demand_regions == [0,1]
    sol[0] == 1
    y1[0,0] == 0
    

    Correct?
    And instead you want
    y1[0,0] == sol[0] == 1
    
    .

    This would be strange, but apparently it has nothing to do with CPLEX.
    #CPLEXOptimizers
    #DecisionOptimization


  • 32.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Wed November 30, 2011 10:54 AM

    Originally posted by: Eumpfenbach


    That is exactly what I am seeing. I repeated it on two computers and got the same results.

    What should I do? I put a lot of time into this and can't give up now. I am not a computer scientist but is it possible that the stack of operations is getting messed up and it is creating the dual problem before it has finished creating the binary variables? I'm just reaching for any possible explanation...

    If the error is something with python, where should I go? I haven't had any luck finding a central help board like this one has for cplex. It seems like it is a bunch of ad hoc boards all over the internet that rarely get checked.
    #CPLEXOptimizers
    #DecisionOptimization


  • 33.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Wed November 30, 2011 12:19 PM

    Originally posted by: SystemAdmin


    I'm really not a python expert, but maybe you are doing something wrong with the list y1.
    Please, add some output to your function populate_y_variables(sol):

    def populate_y_variables(sol):
        temp_master = sol
        marker = y1_start
        print "temp_master: " + str(temp_master)
        print "y1 before: " + str(y1) 
        for i in configs:
            for j in demand_regions:
                y1[i,j] = temp_master[marker]
                marker = marker +1
        print "y1 after: " + str(y1) 
        ...
    


    And post here the exact output you get the first two times that the function is called and the time when you get the wrong/unbounded subproblem.
    #CPLEXOptimizers
    #DecisionOptimization


  • 34.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Wed November 30, 2011 12:54 PM

    Originally posted by: Eumpfenbach


    The first two times:

    
    xxxxxxxx Lazy Cut xxxxxxxxxxxxxxx temp_master: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -400000.0] y1 before:   [[0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [1 0] [0 0] [0 0] [0 0] [0 0] [1 0] [0 0] [0 0] [0 1] [0 1]] y1 after:   [[0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [1 0] [0 0] [0 0] [0 1] [1 1]] status = 1 xxxxxxxx Lazy Cut 
xxxxxxxxxxxxxxx temp_master: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, -6.9388939039072284e-18, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 6.9388939039072284e-18, -139099.54783597629] y1 before:   [[0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [1 0] [0 0] [0 0] [0 1] [1 1]] y1 after:   [[0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [1 0] [0 0] [0 0] [0 1] [1 1]]
    


    which is as expected. Now an unbounded problem:

    
    xxxxxxxx Lazy Cut xxxxxxxxxxxxxxx temp_master: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.99999999999999956, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.9428902930940239e-16, 0.0, -2.7755575615628914e-17, 0.0, 0.99999999999999989, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.99999999999999889, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -8.3266726846886741e-17, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -2.7755575615628914e-17, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.99999999999999967, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -5.2735593669694936e-16, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.5092094240998222e-16, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -400000.0] y1 before:   [[0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] 
[1 0] [0 1] [1 0] [0 0] [0 0] [0 0] [0 1]] y1 after:   [[0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 1] [0 0] [0 0] [0 0] [0 0] [0 1]]
    


    So it appears that when I run my code, the error occurs when the solution vector has a number like .9999999999999 in it instead of one. This is puzzling because my code should still set that part of y1 equal to .9999999999999, not zero. I don't have a statement like if sol == 1 then y1 == 1 (I am leaving out the indices so board doesn't think I am typing html).

    I am going to try adding a step where I round the solution vector to be equal to exactly 1 or zero before I split it up and see what happens.
    #CPLEXOptimizers
    #DecisionOptimization


  • 35.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Wed November 30, 2011 02:04 PM

    Originally posted by: Eumpfenbach


    That seems to have fixed it. Still makes absolutely no sense why those variables were being set to 0 and not .999999999999 but whatever.

    Thanks.
    #CPLEXOptimizers
    #DecisionOptimization


  • 36.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Wed November 30, 2011 04:29 PM

    Originally posted by: Eumpfenbach


    OK, so moving on, is there a way to make my problem multithreaded now? I have found nothing online about multithreading CPLEX within Python. I don't really even see that much for the C API.
    #CPLEXOptimizers
    #DecisionOptimization


  • 37.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Mon December 05, 2011 07:55 AM

    Originally posted by: SystemAdmin


    What do you mean "make my problem multi-threaded"? CPLEX automatically uses as many threads as you have cores, unless you explicitly ask it to use fewer threads. So CPLEX already solves your problem using multiple threads.
    Or do you want to solve multiple slave problems in parallel? In that case you need to create and run the threads in Python (and should refer to some Python forum or manual).
    #CPLEXOptimizers
    #DecisionOptimization


  • 38.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Mon December 05, 2011 12:41 PM

    Originally posted by: Eumpfenbach


    The comment in the Bender's example says:

    # Set the maximum number of threads to 1.
    # This instruction is redundant: If MIP control callbacks are registered,
    # then by default CPLEX uses 1 (one) thread only.
    # Note that the current example may not work properly if more than 1 threads
    # are used, because the callback functions modify shared global data.
    # We refer the user to the documentation to see how to deal with multi-thread
    # runs in presence of MIP control callbacks.

    but I don't see any documentation on this for Python. I'm just looking for a little more information about what is necessary. My decomposition is running slower than standard CPLEX and I need to do something to aid it.
    #CPLEXOptimizers
    #DecisionOptimization


  • 39.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue December 06, 2011 06:46 AM

    Originally posted by: SystemAdmin


    The main issue with user callbacks and parallel execution is that you may access global data concurrently in your callback.

    Basically, what you need to do is realize that your callback method can be called concurrently on multiple threads and take action to avoid the problems that arise from this fact. Usually, you would protect access to global data with a lock, for example a mutex. Please refer to the Python documentation for how to do this in Python.

    Note also that this means, in particular, that you have to have individual LP problems and environments for each thread to solve your subproblems. Having only a single LP and environment would mean that CPLEX would operate concurrently on the same LP/env, which is a very bad idea.

    The easiest solution to all of these issues is to protect the whole callback function with a lock, such that only one incarnation of your callback method can be called at the same time. But of course, this can result in a performance bottleneck.

    Then, if you are sure that your callback function can deal with being executed concurrently, you can set the threads parameter to a positive value to get parallel execution.
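    A sketch of the "protect the whole callback with a lock" option, using the standard-library threading module (whether a single lock is enough depends on what else your callback touches):

    import threading
    from cplex.callbacks import LazyConstraintCallback

    callback_lock = threading.Lock()  # shared by all callback invocations

    class LockedBendersCallback(LazyConstraintCallback):
        def __call__(self):
            # only one thread at a time runs the separation code
            with callback_lock:
                sol = self.get_values(self.x)
                if self.workerLP.separate(sol, self.x):
                    self.add(constraint=self.workerLP.cutLhs,
                             sense="L",
                             rhs=self.workerLP.cutRhs)

    # then allow CPLEX to use several threads despite the control callback:
    # master.parameters.threads.set(4)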

    Tobias
    #CPLEXOptimizers
    #DecisionOptimization


  • 40.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue December 06, 2011 11:43 AM

    Originally posted by: Eumpfenbach


    I appreciate your answer. I have to do some reading to understand it fully, though. This is all new to me. Might have a follow up after I am more confident.
    #CPLEXOptimizers
    #DecisionOptimization


  • 41.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Tue November 22, 2011 09:11 PM

    Originally posted by: Eumpfenbach


    I ran this on another desktop (Linux, 64 bit, python 2.6, ILOG 12.3) and the same error happened. I'm stumped on this one...
    #CPLEXOptimizers
    #DecisionOptimization


  • 42.  Re: Help following Bender's Decomposition example (bendersatsp.py)

    Posted Thu November 17, 2011 11:38 AM

    Originally posted by: SystemAdmin


    MaxShipCost is a variable in the master problem that acts as a surrogate for the objective contribution of the variables in the subproblem. Initially it will be unconstrained, so the first time you solve the master it will hit its lower bound. Eventually the master will send a feasible solution to the subproblem, at which time you should generate a "point" cut (based on the dual solution to the subproblem). The point cut (added to the master) will bound MaxShipCost below by a linear function of the other master variables.

    Paul

    Mathematicians are like Frenchmen: whenever you say something to them, they translate it into their own language, and at once it is something entirely different. (Goethe)
    #CPLEXOptimizers
    #DecisionOptimization