Decision Optimization


Manual Benders Implementation on docplex

  • 1.  Manual Benders Implementation on docplex

    Posted Wed March 15, 2023 10:37 AM

    I am using a manual Benders implementation for a problem. For that, I need to store certain solution values, since I need to transfer them between the master problem and the subproblem. However, as the size of the problem gets larger, this becomes really slow. I define those values as NumPy arrays as follows:

        import numpy as np

        alpha_bar_bd = np.zeros((demand, facility))
        beta_bar_bd = np.zeros(demand)

    and update them in each iteration as follows:

        
        for i in range(1, demand + 1):
            beta_bar_bd[i-1] = beta_bd[i].solution_value
            for j in range(1, facility + 1):
                alpha_bar_bd[i-1, j-1] = alpha_bd[i, j].solution_value
        
    It is probably worth mentioning that most of these array entries and solution values are zero. Only a subset of the facilities is selected, and even for the selected facilities only some of the values are nonzero. Is there any way to focus only on the nonzero elements and update them safely, without losing information or introducing mistakes?
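
    For illustration, this is roughly the kind of sparse update I have in mind (a rough sketch only; open_facilities is a placeholder name for the set of facility indices selected in the current master solution):

        # Sketch: reset the arrays first so no stale values from the previous
        # iteration survive, then write only the entries that can be nonzero.
        alpha_bar_bd[:, :] = 0.0
        beta_bar_bd[:] = 0.0
        for i in range(1, demand + 1):
            beta_bar_bd[i - 1] = beta_bd[i].solution_value
            for j in open_facilities:  # placeholder: indices of the selected facilities
                alpha_bar_bd[i - 1, j - 1] = alpha_bd[i, j].solution_value

    I am not sure whether restricting the inner loop like this is always safe, which is part of what I am asking.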

    Additionally, I define the CPLEX variables as dictionaries:

        # Define the key set
        alpha_keys = [(i, j) for i in range(1, demand + 1) for j in range(1, facility + 1)]

        # Define the variables, indexed by (i, j)
        alpha_bd = m_bd.continuous_var_dict(alpha_keys, name="alpha")


    The greatest time loss occurs during the following operation:

        m_bd_ub.add_constraint(
            theta_bd
            <= sum(u_bd[j] * alpha_bd[i, j].solution_value
                   for i in range(demand) for j in range(facility))
            + sum((-u_bd[j] + 1) * pi_bd[i, j].solution_value
                  for i in range(demand) for j in range(facility))
            + sum(y_bd[j] * eta_bd[i, j].solution_value
                  for i in range(demand) for j in range(facility))
            + sum(zeta_bd[i].solution_value for i in range(demand)),
            ctname='new constraint')
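
    For reference, this is how I am thinking of restructuring it, caching the dual values once per iteration (a sketch only; pi_bar_bd, eta_bar_bd and zeta_bar_bd are hypothetical counterparts of alpha_bar_bd above, and m_bd_ub.sum is docplex's Model.sum, which as far as I know is generally recommended over Python's built-in sum for building expressions):

        # Sketch: cache the dual values once, then build the cut from plain
        # floats instead of repeated .solution_value lookups.
        # pi_bar_bd, eta_bar_bd and zeta_bar_bd are assumed to be filled in
        # the same way as alpha_bar_bd / beta_bar_bd above.
        cut_rhs = (
            m_bd_ub.sum(u_bd[j] * float(alpha_bar_bd[i, j])
                        for i in range(demand) for j in range(facility))
            + m_bd_ub.sum((-u_bd[j] + 1) * float(pi_bar_bd[i, j])
                          for i in range(demand) for j in range(facility))
            + m_bd_ub.sum(y_bd[j] * float(eta_bar_bd[i, j])
                          for i in range(demand) for j in range(facility))
            + float(zeta_bar_bd.sum())
        )
        m_bd_ub.add_constraint(theta_bd <= cut_rhs, ctname='new constraint')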

     - Is there anything I can do to make this process run faster?
       
     - I know that generating Pareto-optimal cuts could improve the convergence speed. However, if the core points are not chosen well, this may increase the computation time rather than reduce it. Is there a way to select core points effectively?
       

     - I also tried defining the variables in a NumPy array, as shown below.
       However, it did not have any impact on the speed. Would this be
       expected? I am not sure if I am doing it wrong. (A quick look at what
       that array actually contains is sketched after the code.)

        alpha_bd = np.array([[m_bd_lb.continuous_var(name='alpha_bd_{0}_{1}'.format(i, j), lb=0)
                              for j in range(1, facility + 1)]
                             for i in range(1, demand + 1)])
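
    When I inspect that array, it only holds the variable objects, so I suspect NumPy cannot actually vectorize anything over it and the loops still go through plain Python calls. A quick check, for what it is worth:

        # The array holds generic Python objects, not a numeric dtype,
        # so NumPy offers no vectorized speedup here.
        print(alpha_bd.dtype)         # object
        print(type(alpha_bd[0, 0]))   # a docplex continuous variable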



    ------------------------------
    Cem Can
    ------------------------------