Decision Optimization

Relaxed Lexicographic Optimization

  • 1.  Relaxed Lexicographic Optimization

    Posted 3 days ago

    Dear all,

    I'm exploring alternatives to the weighted-sum approach we're currently using for multi-objective optimization. I realize that lexicographic optimization is possible in CPO via the minimize_static_lex method.

    I wonder if it is possible to implement a relaxed version of the lexicographic method:

    1. min f1 => F1
    2. min f2 s.t. f1 <= F1 * d

    where f1, f2 are objective functions and F1 is the minimal value obtained in the first step. The parameter d >= 1 would then relax the constraint.
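    To make the intended behavior concrete, here is a solver-free sketch of the scheme over a finite candidate set (the relaxed_lex helper and the toy points are made up for illustration; this is not CPO API):

```python
# Two-phase relaxed lexicographic minimization over a finite candidate set.
# Hypothetical helper and toy data, for illustration only.

def relaxed_lex(candidates, f1, f2, d=1.0):
    """Phase 1: F1 = min f1.  Phase 2: min f2 s.t. f1 <= F1 * d."""
    F1 = min(f1(c) for c in candidates)                  # 1. min f1 => F1
    feasible = [c for c in candidates if f1(c) <= F1 * d]
    return min(feasible, key=f2)                         # 2. min f2 s.t. bound

# Toy (f1, f2) points on a Pareto front.
points = [(10, 50), (10, 40), (11, 20), (15, 5)]
f1 = lambda p: p[0]
f2 = lambda p: p[1]

strict = relaxed_lex(points, f1, f2, d=1.0)   # classic lexicographic: (10, 40)
relaxed = relaxed_lex(points, f1, f2, d=1.1)  # 10% slack on f1 admits (11, 20)
```

    With d = 1 this reduces to the ordinary lexicographic method; d > 1 trades a bounded loss on f1 for gains on f2.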

    Is that possible in CPO?

    Sebastian Bayer

  • 2.  RE: Relaxed Lexicographic Optimization

    Posted 2 days ago
    Hello Sebastian,

    Sure, what you suggest is possible, but why not use minimize_static_lex?
    What need do you have that wouldn't be addressed by multi-criteria objectives?


    Christiane Bracchi

  • 3.  RE: Relaxed Lexicographic Optimization

    Posted 2 days ago

    Dear Christiane,

    Many thanks for your swift reply!

    I'm in a situation where I have a clear preference between two objectives, which suggests using the lexicographic method.

    However, I'm willing to give up "a little" (say 1%) of the primary objective to gain in the secondary one. This is interesting, for example, when the Pareto front flattens out. I could of course achieve this by computing the whole Pareto front and selecting the weighting parameter that corresponds to giving up that much in one criterion, but the relaxed version of the lexicographic method would be much more convenient. Moreover, the goal is to have this in an automated solution, not a one-time analysis.



    Sebastian Bayer

  • 4.  RE: Relaxed Lexicographic Optimization

    Posted 2 days ago
    Hello Sebastian,

    A way to improve your first proposal is to use a starting point.
    So you would have something like:
    minimize f1
    get the objective value F1 and a solution S (values of some variables)
    add the constraint f1 <= F1 * d (with d >= 1)
    set S as the starting point (set_starting_point(S))
    minimize f2

    See set_starting_point.

    I hope this helps,

    Christiane Bracchi

  • 5.  RE: Relaxed Lexicographic Optimization

    Posted yesterday
    Hi Christiane,

    if I understand you correctly, you suggest something like this:

    from docplex.cp import model as cp

    model = cp.CpoModel()
    jobs = [cp.interval_var(size=5) for _ in range(10)]
    max_end_times = cp.max(cp.end_of(job) for job in jobs)
    sum_end_times = cp.sum(cp.end_of(job) for job in jobs)

    # Phase 1: minimize the primary objective.
    obj1 = cp.minimize(max_end_times)
    model.add(obj1)
    sol1 = model.solve(TimeLimit=1)

    # Phase 2: bound the primary objective (d > 1 would relax it),
    # warm-start from the phase-1 solution, and minimize the secondary one.
    d = 1
    model.remove(obj1)
    model.add(max_end_times <= sol1.get_objective_values()[0] * d)
    model.set_starting_point(sol1.solution)
    model.add(cp.minimize(sum_end_times))
    sol2 = model.solve(TimeLimit=1)

    If that is what you had in mind, then it seems to work in principle. However, I see some downsides. It's rather complicated to implement, and more importantly, the optimization has to be split into phases: in each phase, CPO focuses on only one of the criteria instead of all of them simultaneously. If my understanding is correct, this would seriously slow down the optimization process. Lastly, improvements to objectives that are no longer part of the current model are ignored. All of this is elegantly avoided by minimize_static_lex. Too bad it has no relaxation parameter.

    I'd hoped for a simpler solution, but if there isn't one, then I'll have to reconsider my approach to the problem.


    Sebastian Bayer

  • 6.  RE: Relaxed Lexicographic Optimization

    Posted 23 hours ago

    Hello Sebastian,

    Unless you specifically have different phases, having this relaxation means that the solution quality transitivity property is no longer maintained, and depending on the random seed or other factors, you can have different 'optimal' solutions produced, which is not desirable.

    As to your other point, note that in your above code, in the second optimization, you can still optimize using minimize_static_lex(obj1, obj2) to keep concentrating on obj1 in the second phase.  However, if the value of obj1 is already good (i.e. hard to improve further) doing this could be less efficient as CP Optimizer will still spend some of its time trying to improve obj1.
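    Applied to your snippet, the second phase would then look something like this (a sketch; minimize_static_lex takes a list of expressions, and model, obj1, max_end_times, sum_end_times, sol1 and d are as defined in your code):

```
# Second phase: bound the primary objective, then keep both criteria,
# still prioritizing the (now bounded) primary one.
model.remove(obj1)
model.add(max_end_times <= sol1.get_objective_values()[0] * d)
model.add(cp.minimize_static_lex([max_end_times, sum_end_times]))
sol2 = model.solve(TimeLimit=1)
```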

    You could also consider swapping obj1 and obj2 in the minimize_static_lex in the second model.

    Paul Shaw