Closing the (MIP) gap – Part I

MIP solvers are more powerful than ever. They mainly do two things: (1) finding a high-quality (ideally optimal) solution, and (2) proving the optimality of that solution. While performance is usually evaluated in terms of time-to-optimality, quite often a high-quality solution is found in only a fraction of the time needed to prove optimality. Here I show that this behavior can be observed on capacitated lot-sizing models.
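For concreteness, the relative MIP gap a solver reports is the normalized distance between the incumbent solution and the best proven bound; a solver stops once this gap falls below its tolerance. A minimal sketch (note that the exact denominator convention varies slightly between solvers):

```python
def relative_mip_gap(incumbent: float, best_bound: float) -> float:
    """Relative MIP gap: |incumbent - best_bound| / |incumbent|.

    Common definition for a non-zero incumbent; solvers differ
    slightly in how they guard against a zero denominator.
    """
    return abs(incumbent - best_bound) / abs(incumbent)

# An incumbent of 100 with a best bound of 99 is a 1% gap:
print(relative_mip_gap(100.0, 99.0))  # 0.01
```

Termination within 0.01% of the optimum then simply means the solver stopped once this value dropped below 1e-4.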

[This is post #2 inspired by a tweet from IBM’s Jean-François Puget].

Proving optimality takes time and effort

Several of the MIP models that I’ve run on CPLEX and Gurobi over the last few years took more than one hour to solve (capacitated lot-sizing instances are relatively small, but many still cannot be solved to optimality in 2 hours). However, on several of these seemingly “difficult” instances, the latest versions of these two solvers often find a solution within 1% of the optimum in a few minutes; the remaining time is spent either refining this solution or proving its optimality.

To assess whether my intuition was right, I ran some tests over 3 sets of capacitated lot-sizing instances (70 instances in total). While it takes on average 5.4 seconds to get a solution within 1% of the optimum (geometric mean), it takes more than 200 seconds on average to find one within 0.01% of the optimum (which is CPLEX’s default termination gap). In other words, closing the gap from 1% to 0.01% takes about 37 times more computing time than finding the 1%-optimal solution in the first place. The same analysis done with ticks rather than seconds yields a similar ratio of 36 to 1, and is depicted in the graph below:
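The geometric mean used above is the standard way to average solver run times, since it is less sensitive to a few very slow instances than the arithmetic mean. A quick sketch of the computation (the data here is made up for illustration, not the actual 70-instance measurements):

```python
import math

def geometric_mean(values):
    """Geometric mean via the mean of the logarithms."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-instance times in seconds (illustrative only):
times_to_1pct = [2.0, 4.0, 8.0, 16.0]       # time to reach a 1% gap
times_to_001pct = [80.0, 160.0, 320.0, 640.0]  # time to reach a 0.01% gap

ratio = geometric_mean(times_to_001pct) / geometric_mean(times_to_1pct)
print(ratio)  # 40.0 on this toy data; about 37 on the real instances
```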


A feasible solution is found quickly

These experiments also suggest two culprits responsible for this performance. Valid inequalities (cuts) help strengthen the relaxation and thus improve the lower bound. On 97% of the models, the 1%-optimal feasible solution is found through the use of a heuristic, rather than through solving the LP relaxation at a node. For about 50% of these models, the feasible solution within 1% of the optimum is found at the root node; the other 50% needed 1,276 nodes on average. Finding a 0.01%-optimal solution required the exploration of 768,000 nodes on average (about 600 times more nodes than what is required to find a 1%-optimal solution).

Experiment details

These experiments were done with 70 instances from three sets, on an Intel Xeon E5405 processor with 8 GB of RAM. Default settings were used, except that the maximum number of threads was set to 4.
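For reference, the settings described above correspond roughly to the following commands in CPLEX’s interactive optimizer (a sketch; the relative gap shown is the 1e-4 default, and parameter names may vary between versions):

```
set threads 4
set mip tolerances mipgap 1e-4
```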


  1. […] a previous post published in May, I showed that for several models, despite long times needed to reach the optimality threshold […]
