Open question: tuning for hard instances

This is a rather short post on an open question, one I’ve been thinking about for some time. Automated tuning tools for MIP solvers have been around for the last few years. During that time, I have used these tools – and sometimes seen them used – to reduce run times for various types of models. They are worth the effort.

However, I have had – and seen – far less success with very difficult MIP models: models that are not intractable, but take more than 8-10 hours to solve on a decent machine. For these models, I would like to find parameter settings that allow them to be solved. Tuning tools can be used for that, but the time required to perform the tuning is prohibitive, since the solver carries out long tentative runs for every parameter combination it tries. Moreover, a large portion of the scientific literature on parameter tuning deals with models that solve relatively fast.
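
For concreteness, this is roughly what a tuning run looks like through a solver API. A minimal sketch, assuming gurobipy and a placeholder model file; the parameters shown (TuneTimeLimit, TuneResults) are Gurobi's, and other solvers have equivalents. The time budget is exactly the painful part: if a single run of the model needs 8-10 hours, any budget that allows even a handful of tentative runs is measured in days.

    import gurobipy as gp

    model = gp.read("hard_model.mps")        # placeholder instance file
    model.Params.TuneTimeLimit = 4 * 3600    # cap the whole tuning session at 4 hours
    model.Params.TuneResults = 3             # keep the 3 best parameter sets found

    model.tune()                             # the tuner performs tentative solves

    for i in range(model.TuneResultCount):
        model.getTuneResult(i)               # load the i-th best parameter set
        model.write("tuned_%d.prm" % i)      # save it for later production runs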

My go-to approaches would be:

  1. Open a cloud account (say, on Amazon) and launch long tentative runs with various parameter settings in parallel;
  2. Decompose the big instance into smaller instances and run the tuning on those, in the hope that the resulting settings also pay off on the large model (a rough sketch of this idea follows below).
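
A rough sketch of option 2, assuming gurobipy and hypothetical file names: tune on a set of smaller instances, keep the best parameter set from each, and then apply one of them to the full model. Whether the settings actually transfer is precisely the open question.

    import gurobipy as gp

    small_instances = ["slice_1.mps", "slice_2.mps", "slice_3.mps"]   # hypothetical slices

    for name in small_instances:
        m = gp.read(name)
        m.Params.TuneTimeLimit = 1800        # 30 minutes of tuning per small instance
        m.tune()
        if m.TuneResultCount > 0:
            m.getTuneResult(0)               # best parameter set for this instance
            m.write(name.replace(".mps", ".prm"))

    # Apply one of the candidate parameter sets to the full model.
    big = gp.read("full_model.mps")          # hypothetical full-size instance
    big.read("slice_1.prm")                  # Model.read() loads a saved .prm file
    big.optimize()

Gurobi's grbtune command-line tool also accepts several model files in one call and searches for settings that help all of them, which may be a closer fit for this idea than tuning the slices one by one.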

Any other suggestions would be greatly appreciated!

Comments

  1. Tobias Achterberg says:

    This is indeed a challenging task. One thing to consider is that MIP instances have many different sources of difficulty. If they solve very slowly because the LP relaxation is just difficult to solve, then there is not much hope of getting reasonable tuning done in a short amount of time, even if you throw an infinite amount of computing resources at the problem. But if the issue is more on the combinatorial side, i.e., the tree grows very large, then there is hope. Often a skilled user can already see after a few thousand nodes what goes wrong in the solving process and can then manually tweak parameters or improve the model formulation. The former is actually what the tuning tools, e.g., as implemented in CPLEX, try to do automatically. Moreover, if you have a cluster, a cloud, or a super-computer at your disposal, it is possible that the distributed MIP solver, in particular the distributed tree search (which you need to activate in CPLEX 12.6 through a non-default parameter), helps performance.
    Another possibility is to send your problem instances to your favorite solver vendor and let them try to find out what is going on. Even better, contribute them to MIPLIB, which could suddenly turn helping you with your problem into a big marketing endeavor 😉

  2. If you’ve got access to a few machines with similar performance, you may want to try the Gurobi distributed tuning tool…

    http://www.gurobi.com/documentation/5.6/reference-manual/distributed_parallel_algor

    We’ve been a bit surprised by how quickly our behavior has changed inside the company – we now automatically grab at least 5 machines whenever we do any tuning.
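
To make the "look at the first few thousand nodes" advice from the first comment concrete: a minimal diagnostic sketch, assuming gurobipy and a placeholder file name (any solver with a node limit supports the same pattern). It first times the root relaxation, to see whether the LP itself is the bottleneck, and then runs a limited tree search to see how the bound and the incumbent move.

    import time
    import gurobipy as gp

    m = gp.read("hard_model.mps")            # placeholder instance

    # 1) Is the LP relaxation itself the problem?
    relax = m.relax()
    t0 = time.time()
    relax.optimize()
    print("root LP time:", round(time.time() - t0, 1), "s, objective:", relax.ObjVal)

    # 2) How does the tree behave over the first few thousand nodes?
    m.Params.NodeLimit = 5000
    m.optimize()
    print("nodes:", m.NodeCount, "best bound:", m.ObjBound)
    if m.SolCount > 0:
        print("incumbent:", m.ObjVal, "gap:", m.MIPGap)
    else:
        print("no incumbent after the node limit")

A stalling best bound with a rapidly growing node count points at the combinatorial side; a root LP that already takes hours points at the relaxation, where, as the comment says, tuning is unlikely to save much.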
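
On the distributed tuning mentioned in the second comment, this is roughly how it is driven from the Python API: a sketch assuming gurobipy plus a pool of machines running Gurobi's distributed worker service. The host names are placeholders, and the parameters shown (WorkerPool, TuneJobs) come from later Gurobi versions than the 5.6 manual linked above, so treat this as an illustration of the idea rather than a recipe.

    import gurobipy as gp

    m = gp.read("hard_model.mps")            # placeholder instance
    # Hypothetical pool of worker machines (default Remote Services port shown).
    m.Params.WorkerPool = "machine1:61000,machine2:61000,machine3:61000"
    m.Params.TuneJobs = 3                    # spread tuning trials over 3 workers
    m.Params.TuneTimeLimit = 6 * 3600        # overall tuning budget: 6 hours
    m.tune()
    if m.TuneResultCount > 0:
        m.getTuneResult(0)                   # load the best parameter set found
        m.write("distributed_tuned.prm")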
