Performance variability: consequences

If you work on difficult mixed-integer programming (MIP) models, you have experienced performance variability. The term refers to the impact on performance of seemingly performance-neutral changes made to the model or the solver. Part 1 of this series presented the problem and its relevance, and part 2 provided some readings on the topic. This post presents three of the many consequences of this phenomenon.

You can’t tune a random seed

When a MIP solver struggles to provide good solutions, one of the most common (and powerful) remedies is to tune the solver's parameters. Both CPLEX and Gurobi provide automated tuning tools for this purpose. The problem is that you cannot tune a parameter that is not supposed to impact performance so that it performs better on average. Changing the random seed can improve or worsen solution times on a specific instance, but you will not find a value that is good across a broad range of instances. Some solver settings might reduce the severity of the problem, however. On many models, the impact of changing the random seed is larger than the impact of tuning the solver's parameters.
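To see the phenomenon on your own model, it is enough to re-solve it with a handful of seeds and compare runtimes. Here is a minimal sketch, assuming gurobipy and a hypothetical model file name; the same loop is easy to write with the CPLEX APIs.

    import gurobipy as gp

    runtimes = []
    for seed in range(5):
        model = gp.read("model.mps")    # hypothetical file; re-read so each run starts fresh
        model.Params.Seed = seed        # the only change between runs
        model.Params.OutputFlag = 0     # silence the log
        model.optimize()
        runtimes.append(model.Runtime)  # solve time reported by the solver
        model.dispose()

    print("min %.1fs, max %.1fs" % (min(runtimes), max(runtimes)))

On variability-prone models, the spread between the fastest and slowest run can easily reach an order of magnitude.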

Algorithm development

The magnitude of performance variability creates challenges for algorithmic development. PV introduces serious noise into measurements of algorithmic performance: it becomes harder to assess whether a new branching rule or a new heuristic actually improves a MIP code. The best MIP recipe was already known to be instance-dependent; now it can also vary with parameters such as the random seed or the input order, which are far too numerous to explore exhaustively. This implication was accurately summarized by Emilie Danna in 2008, and it is still very relevant. The same curse applies to tuning: "good" parameter settings must ideally improve performance over a broad range of instances and remain consistent across random runs.
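One common way to cope with this noise is to aggregate results over many instances and several seeds, for example with a shifted geometric mean of solve times, rather than trusting any single run. A small sketch with placeholder numbers:

    from math import exp, log

    def shifted_geomean(times, shift=10.0):
        # Shifted geometric mean of solve times (seconds), a common MIP benchmarking metric.
        return exp(sum(log(t + shift) for t in times) / len(times)) - shift

    # Placeholder runtimes (seconds) over an instance set and several seeds,
    # for the current code and for a candidate branching rule.
    baseline  = [12.0, 340.0, 95.0, 41.0, 17.0, 230.0]
    candidate = [10.0, 410.0, 88.0, 39.0, 15.0, 205.0]

    print("baseline :", round(shifted_geomean(baseline), 1))
    print("candidate:", round(shifted_geomean(candidate), 1))

Even with such aggregation, small differences between two variants may still be within the noise induced by PV.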

Problem tractability

MIP models are often sorted by difficulty, for example into classes such as "models that take less than 1 minute to solve". In several cases, the impact of PV-induced changes in solving times is enough to make a model jump from one difficulty class to another: a model that is "easy" when solved with seed n may become difficult when solved with seed n+1.

This has practical implications as well. Say a customer needs a solution to a scheduling problem in 10 minutes at most, and solving times vary from 3 to 15 minutes depending on instance data and random seed. If solution quality increases steadily during the search, a time limit can be enforced on the solver in order to grab the best solution found within 10 minutes. However, that is not always the case. Below is a figure showing the evolution of the gap over time: the blue line is the optimality gap, and the red line is the value of the best integer solution, which is what the user is interested in.

[Figure: optimality gap and best integer solution over time for instance P64T28C5]

The solver finds a solution after a few seconds, and that solution remains the incumbent for about 10 minutes. Shortly after finding a much better solution, the solver terminates. If we end the search after 10 minutes (600 seconds), we are left with a solution of very poor quality.
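You can still enforce the deadline and inspect whatever incumbent the solver has at that point; whether it is usable is then a separate question, as the figure shows. A minimal sketch, again assuming gurobipy and a hypothetical instance file:

    import gurobipy as gp

    model = gp.read("schedule.mps")   # hypothetical instance name
    model.Params.TimeLimit = 600      # hard 10-minute deadline
    model.optimize()

    if model.SolCount > 0:
        print("best objective within the deadline: %g" % model.ObjVal)
        print("remaining optimality gap: %.1f%%" % (100 * model.MIPGap))
    else:
        print("no feasible solution found within the time limit")

Checking the gap (or the objective value itself) at the deadline at least tells you when the incumbent is too poor to ship, even if it does not solve the underlying problem.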
